Filter Selection Model for Generating
Visual Motion Signals
Steven J. Nowlan*
CNL, The Salk Institute
P.O. Box 85800, San Diego, CA
92186-5800
Terrence J. Sejnowski
CNL, The Salk Institute
P.O. Box 85800, San Diego, CA
92186-5800
Abstract
Neurons in area MT of primate visual cortex encode the velocity
of moving objects. We present a model of how MT cells aggregate
responses from V1 to form such a velocity representation. Two
different sets of units, with local receptive fields, receive inputs
from motion energy filters. One set of units forms estimates of local
motion, while the second set computes the utility of these estimates.
Outputs from this second set of units "gate" the outputs from the
first set through a gain control mechanism. This active process
for selecting only a subset of local motion responses to integrate
into more global responses distinguishes our model from previous
models of velocity estimation. The model yields accurate velocity
estimates in synthetic images containing multiple moving targets
of varying size, luminance, and spatial frequency profile and deals
well with a number of transparency phenomena.
1 INTRODUCTION
Humans, and primates in general, are very good at complex motion processing
tasks such as tracking a moving target against a moving background under varying
luminance. In order to accomplish such tasks, the visual system must integrate
many local motion estimates from cells with limited spatial receptive fields and
marked orientation selectivity. These local motion estimates are sensitive not just
*Current address, Synaptics Inc., 2698 Orchard Parkway, San Jose, CA 95134.
to the velocity of a visual target, but also to many other features of the target such
as its spatial frequency profile or local edge orientation. As a result, the integration
of these motion signals cannot be performed in a fixed manner, but must be a
dynamic process dependent on the visual stimulus.
Although cells with motion-sensitive responses are found in primary visual cortex
(V1 in primates), mounting physiological evidence suggests that the integration of
these responses to produce responses which are tuned primarily to the velocity of a
visual target first occurs in primate visual area MT (Albright 1992, Maunsell and
Newsome 1987). We propose a computational model for integrating local motion
responses to estimate the velocity of objects in the visual scene. These velocity
estimates may be used for eye tracking or other visuo-motor skills. Previous computational approaches to this problem (Grzywacz and Yuille 1990, Heeger 1987,
Heeger 1992, Horn and Schunk 1981, Nagel 1987) have primarily focused on how
to combine local motion responses into local velocity estimates at all points in an
image (the velocity flow field). We propose that the integration of local motion
measurements may be much simpler, if one does not try to integrate across all of
the local motion measurements but only a subset. Our model learns to estimate
the velocity of visual targets by solving the problems of what to integrate and how
to integrate in parallel. The trained model yields accurate velocity estimates from
synthetic images containing multiple moving targets of varying size, luminance, and
spatial frequency profile.
2 THE MODEL
The model is implemented as a cascade of networks of locally connected units
which has two parallel processing pathways (figure 1). All stages of the model
are represented as "layers" of units with a roughly retinotopic organization. The
figure schematically represents the activity in the model at one instant of time.
Conceptually, it is easier to think of the model as computing evidence for particular
velocities in an image rather than computing velocity directly. Processing in the
model may be divided into 3 stages, to be described in more detail below. In the
first stage, the input intensity image is converted into 36 local motion "images" (9
of which are shown in the figure) which represent the outputs of 36 motion energy
filters from each region of the input image. In the second stage, the operations of integration and selection are performed in parallel. The integration pathway combines
information from motion energy filters tuned to different directions and spatial and
temporal frequencies to compute the local evidence in favor of a particular velocity.
The selection pathway weights each region of the image according to the amount of
evidence for a particular velocity that region contains. In the third stage, the global
evidence for a visual target moving at a particular velocity $V_k(t)$ is computed as a
sum over the product of the outputs of the integration and selection pathways:

$$V_k(t) = \sum_{x,y} I_k(x, y, t)\, S_k(x, y, t) \qquad (1)$$

where $I_k(x, y, t)$ is the local evidence for velocity $k$ computed by the integration
pathway from region $(x, y)$ at time $t$, and $S_k(x, y, t)$ is the weight assigned by the
selection pathway to that region.
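As a concrete illustration of equation (1), the NumPy sketch below (our own, not the authors' implementation; the array shapes match the model's 33 velocity units on an 8 × 8 grid) computes the global evidence by gating the integration outputs with the selection weights:

```python
import numpy as np

K, H, W = 33, 8, 8            # velocity units and receptive-field grid from the model
I = np.random.rand(K, H, W)   # I_k(x, y, t): local evidence from the integration pathway
S = np.random.rand(K, H, W)   # S_k(x, y, t): weights from the selection pathway

# Equation (1): V_k(t) = sum over regions (x, y) of I_k(x, y, t) * S_k(x, y, t)
V = (I * S).sum(axis=(1, 2))  # shape (K,): global evidence for each velocity
```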
[Figure 1: schematic of the model. Input image (64 × 64) → motion energy filters (9 types, 4 directions) → parallel integration and selection stages (8 × 8 grids) → velocity output.]
Figure 1: Diagram of motion processing model. Processing proceeds
from left to right in the model, but the integration and selection stages
operate in parallel. Shading within the boxes indicates different levels of
activity at each stage. The responses shown in the diagram are intended
to be indicative of the responses at different stages of the model but do
not represent actual responses from the model.
2.1 LOCAL MOTION ESTIMATES
The first stage of processing is based on the motion energy model (Adelson and
Bergen 1985, Watson 1985). This model relies on the observation that an intensity
edge moving at a constant velocity produces a line at a particular orientation in
space-time. This means that an oriented space-time filter will respond most strongly
to objects moving at a particular velocity.¹ A motion energy filter uses the squared
outputs of a quadrature pair (90° out of phase) of oriented filters to produce a
phase independent local velocity estimate. The motion energy model was selected
as a biologically plausible model of motion processing in mammalian V1, based
primarily on the similarity of responses of simple and complex cells in cat area V1
to the output of different stages of the motion energy model (Heeger 1992, Grzywacz
and Yuille 1990, Emerson 1987).
The particular filters used in our model had spatial responses similar to a two-dimensional Gabor filter, with the physiologically more plausible temporal responses
suggested by Adelson and Bergen (1985). The motion energy layer was divided into
a grid of 49 by 49 receptive field locations and at each grid location there were
filters tuned to four different directions of motion (up, down, left, and right). For
each direction of motion there were nine different filters representing combinations
of three spatial and three temporal frequencies. The filter center frequency spacings were 1 octave spatially and 1.5 octaves temporally. The filter parameters and
spacings were chosen to be physiologically realistic, and were fixed during training
of the model. In addition, there was a correspondence between the size of the filter
¹These filters actually respond most strongly to a narrow band of spatial frequencies
(SF) and temporal frequencies (TF), which represent a range of velocities, v = TF/SF.
[Figure 2: schematic of the integration and selection stages. Motion energy (49 × 49) → integration pools with local competition and 33 selection layers (8 × 8) → output (33 units).]
Figure 2: Diagram of integration and selection processing stages. Different shadings for units in the integration and output pools correspond
to different directions of motion. Only two of the selection layers are
shown and the backgrounds of these layers are shaded to match their
corresponding integration and output units. See text for description of
architecture.
receptive fields and the spatial frequency tuning of the filters, with lower frequency
filters having larger spatial extent to their receptive fields. This is also similar to
what has been found in visual cortex (Maunsell and Newsome, 1987).
The input intensity image is first filtered with a difference-of-Gaussians filter, which
is a simplification of retinal processing and provides smoothing and contrast enhancement. Each motion energy filter is then convolved with the smoothed input
image, producing 36 motion energy responses at each location in the receptive field
grid which serve as the input to the next stage of processing.
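To make the motion energy computation concrete, here is a minimal sketch of a quadrature-pair filter and its phase-independent energy response; the Gaussian envelope and patch size are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gabor_quadrature_pair(size, sf, tf, theta):
    """Quadrature pair (90 degrees out of phase) of oriented space-time filters.
    sf: spatial frequency; tf: temporal frequency; theta: spatial orientation."""
    s = np.linspace(-1.0, 1.0, size)
    x, y, t = np.meshgrid(s, s, s, indexing="ij")
    u = x * np.cos(theta) + y * np.sin(theta)       # coordinate along the orientation
    envelope = np.exp(-(x**2 + y**2 + t**2) / 0.5)  # assumed Gaussian envelope
    phase = 2.0 * np.pi * (sf * u + tf * t)         # drifts at velocity v = tf / sf
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(patch, even, odd):
    """Sum of squared quadrature responses gives a phase-independent estimate."""
    return (patch * even).sum() ** 2 + (patch * odd).sum() ** 2

patch = np.random.rand(15, 15, 15)  # a space-time (x, y, t) image patch
even, odd = gabor_quadrature_pair(15, sf=2.0, tf=3.0, theta=0.0)
print(motion_energy(patch, even, odd))
```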
2.2 INTEGRATION AND SELECTION
The integration and selection pathways are both implemented as locally connected
networks with a single layer of weights. The integration pathway can be thought of
as a layer of units organized into a grid of 8 by 8 receptive field locations (figure 2).
Units at each receptive field location look at all 36 motion energy measurements
from each location within a 9 by 9 region of the motion energy receptive field
grid. Adjacent receptive field locations receive input from overlapping regions of
the motion energy layer.
At each receptive field location in the integration layer there is a pool of 33 integration units (9 units in one of these pools are shown in figure 2). These units represent
motion in 8 different directions with units representing four different speeds for each
direction plus a central unit indicating no motion. These units form a log polar representation of the local velocity at that receptive field location, since as one moves
out along any "arm" of the pool of units each unit represents a speed twice as large
as the preceding unit in that arm. All of the integration pools share a common set
of weights, so in the final trained model all compute the same function.
The activity of an integration unit (which lies between 0 and 1) represents the
amount of local support for the corresponding velocity. Local competition between
the units in each integration pool enforces the important constraint that each integration pool can only provide strong support for one velocity. The competition is
enforced using a softmax non-linearity: if $\hat{I}_k(x, y, t)$ represents the net input to unit
$k$ in one of the integration pools, the state of that unit is computed as

$$I_k(x, y, t) = e^{\hat{I}_k(x, y, t)} \Big/ \sum_j e^{\hat{I}_j(x, y, t)}.$$
Note that the summation is performed over all units within a single pool, all of
which share the same (x, y) receptive field location.
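A minimal sketch of this local competition (our illustration): the softmax runs over the 33 units of each pool, independently at every receptive field location.

```python
import numpy as np

def pool_softmax(net):
    """net: (33, 8, 8) net inputs; softmax over the units within each pool."""
    e = np.exp(net - net.max(axis=0, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=0, keepdims=True)

I = pool_softmax(np.random.randn(33, 8, 8))
assert np.allclose(I.sum(axis=0), 1.0)  # each pool's support sums to one
```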
The output of the model is also represented by a pool of 33 units, organized in the
same way as each pool of integration units. The state of each unit in the output
pool represents the global evidence within the entire image supporting a particular
velocity. The state of each of these output units Vk(t) is computed as the weighted
sum of the state of the corresponding integration unit in all 64 integration receptive
field locations (equation (1)). The weights assigned to each receptive field location
are computed by the state of the corresponding selection unit (figure 2). Although
the activity of output units can be treated as evidence for a particular velocity,
the activity across the entire pool of units forms a distributed representation of
a continuous range of velocities (i.e., activity split between two adjacent units
represents a velocity between the optimal velocities of those two units).
The selection units are also organized into a grid of 8 by 8 receptive field locations
which are in one to one correspondence with the integration receptive field locations
(figure 2). However, it is convenient to think of the selection units as being organized
not as a single layer of units but rather as 33 layers of units, one for each output unit.
In each layer of selection units, there is one unit for each receptive field location.
Two of the selection layers are shown in figure 2. The layer with the vertically
shaded background corresponds to the output unit for upward motion (also shaded
with vertical stripes) and states of units in this selection layer weight the states of
upward motion units in each integration pool (again shaded vertically).
There is global competition among all of the units in each selection layer. Again
this is implemented using a softmax non-linearity: if $\hat{S}_k(x, y, t)$ is the net input to
a selection unit in layer $k$, the state of that unit is computed as

$$S_k(x, y, t) = e^{\hat{S}_k(x, y, t)} \Big/ \sum_{x', y'} e^{\hat{S}_k(x', y', t)}.$$
Note that unlike the integration case, the summation in this case is performed over
all receptive field locations. This global competition enforces the second important
constraint in the model, that the total amount of support for each velocity across the
entire image cannot exceed one. This constraint, combined with the fact that the
integration unit outputs can never exceed 1, ensures that the states of the output
units are constrained to be between 0 and 1 and can be interpreted as the global
support within the image for each velocity, as stated earlier.
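For contrast with the integration pools, here is a sketch of the selection softmax (again ours, same assumed shapes): each of the 33 layers is normalized over all 64 receptive field locations, so the total support per velocity is one.

```python
import numpy as np

def layer_softmax(net):
    """net: (33, 8, 8) net inputs; softmax over all locations within each layer."""
    flat = net.reshape(net.shape[0], -1)
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).reshape(net.shape)

S = layer_softmax(np.random.randn(33, 8, 8))
assert np.allclose(S.sum(axis=(1, 2)), 1.0)  # total support per velocity layer is one
```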
The combination of global competition in the selection layers and local competition
within the integration pools means that the only way to produce strong support for
a particular output velocity is for the corresponding selection network to focus all
its support on regions that strongly support that velocity. This allows the selection
network to learn to estimate how useful information in different regions of an image
is for predicting velocities within the visual scene. The weights of both the selection
and integration networks are adapted in parallel as is discussed next.
2.3 OBJECTIVE FUNCTION AND TRAINING
The outputs of the integration and selection networks in the final trained model are
combined as in equation (1), so that the final outputs represent the global support
for each velocity within the image. During training of the system however, the
outputs of each pool of integration units are treated as if each were an independent
estimate of support for a particular velocity. If a training image sequence contains
an object moving at velocity $V_k$, then the target for the corresponding output unit
is set to 1, otherwise it is set to 0. The system is then trained to maximize the
likelihood of generating the targets:
$$\log L = \sum_t \sum_k \log \left( \sum_{x, y} S_k(x, y, t) \, \exp\!\left[ -\left( V_k - I_k(x, y, t) \right)^2 \right] \right) \qquad (2)$$
To optimize this objective, each integration output $I_k(x, y, t)$ is compared to the
target $V_k$ directly, and the outputs closest to the target value are assigned the most
responsibility for that target, and hence receive the largest error signal. At the
same time, the selection network states are trained to try and estimate from the
input alone (i.e., the local motion measurements), which integration outputs are
most accurate. This interpretation of the system during training is identical to the
interpretation given to the mixture of experts (Nowlan, 1990) and the same training
procedure was used. Each pool of integration units functions like an expert network,
and each layer of selection units functions like a gating network.
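As a sketch of the objective for a single time step (our illustration; the shapes and the uniform selection weights are assumptions), equation (2) can be written as:

```python
import numpy as np

def log_likelihood(I, S, V):
    """Equation (2) at one time step.
    I, S: (33, 8, 8) integration outputs and selection weights; V: (33,) targets."""
    match = np.exp(-(V[:, None, None] - I) ** 2)        # per-region fit to each target
    return np.log((S * match).sum(axis=(1, 2))).sum()   # gate over regions, sum over k

I = np.random.rand(33, 8, 8)
S = np.full((33, 8, 8), 1.0 / 64)   # uniform selection weights for illustration
V = np.zeros(33); V[4] = 1.0        # one target moving at velocity 4
print(log_likelihood(I, S, V))
```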
There are, however, two important differences between the current system and the
mixture of experts. First, this system uses multiple gating networks rather than a
single one, allowing the system to represent more than a single velocity within an
image. Second, in the mixture of experts, each expert network has an independent
set of weights and essentially learns to compute a different function (usually different
functions of the same input). In the current model, each pool of integration units
shares the same set of weights and is constrained to compute the same function.
The effect of the training procedure in this system is to bias the computations of
the integration pools to favor certain types of local image features (for example, the
integration stage may only make reliable velocity estimates in regions of shear or
discontinuities in velocity). The selection networks learn to identify which features
the integration stage is looking for, and to weight image regions most heavily which
contain these kinds of features.
3 RESULTS AND DISCUSSION
The system was trained using 500 image sequences containing 64 frames each. These
training image sequences were generated by randomly selecting one or two visual
targets for each sequence and moving these targets through randomly selected trajectories. The targets were rectangular patches that varied in size, texture, and
intensity. The motion trajectories all began with the objects stationary and then
one or both objects rapidly accelerated to constant velocities maintained for the remainder of the trajectory. Targets moved in one of 8 possible directions, at speeds
ranging between 0 and 2.5 pixels per unit of time. In training sequences containing
multiple targets, the targets were permitted to overlap (targets were assigned to
different depth planes at random) and the upper target was treated as opaque in
some cases and partially transparent in other cases. The system was trained using a conjugate gradient descent procedure until the response of the system on the
training sequences deviated by less than 1% on average from the desired response.
The performance of the trained system was tested using a separate set of 50 test
image sequences. These sequences contained 10 novel visual targets with random
trajectories generated in the same manner as the training sequences. The responses
on this test set remained within 2.5% of the desired response, with the largest errors
occurring at the highest velocities. Several of these test sequences were designed so
that targets contained edges oriented obliquely to the direction of motion, demonstrating the ability of the model to deal with aspects of the aperture problem. In
addition, only small, transient increases in error were observed when two moving
objects intersected, whether these objects were opaque or partially transparent.
A more challenging test of the system was provided by presenting the system with
"plaid patterns" consisting of two square wave gratings drifting in different directions (Adelson and Movshon, 1982). Human observers will sometimes see a single
coherent motion corresponding to the intersection of constraints (IOC) direction of
the two grating motions, and sometimes see the two grating motions separately,
as one grating sliding through the other. The percept reported can be altered by
changing the contrast of the regions where the two gratings intersect relative to the
contrast of the grating itself (Stoner et al., 1990). We found that for most grating
patterns the model reliably reported a single motion in the IOC direction, but by
manipulating the intensity of the intersection regions it was possible to find regions
where the model would report the motion of the two gratings separately. Coherent
grating motion was reported when the model tended to select most strongly image
regions corresponding to the intersections of the gratings, while two motions were
reported when the regions between the grating intersections were strongly selected.
We also explored the response properties of selection and integration units in the
trained model using drifting sinusoidal gratings. These stimuli were chosen because
they have been used extensively in exploring the physiological response properties of visual motion neurons in cortical visual areas (Albright 1992, Maunsell and
Newsome 1987). Integration units tended to be tuned to a fairly narrow band of
velocities over a broad range of spatial frequencies, like many MT cells (Maunsell
and Newsome, 1987). The selection units had quite different response properties.
They responded primarily to velocity shear (neighboring regions of differing velocity) and to flicker (temporal frequency) rather than true velocity. Cells with many
of these properties are also common in MT (Maunsell and Newsome, 1987). A final
important difference between the integration and selection units is their response to
whole field motion. Integration units tend to have responses which are somewhat
enhanced by whole field motion in their preferred direction, while selection unit
responses are generally suppressed by whole field motion. This difference is similar
to the recent observation that area MT contains two classes of cell, one whose responses are suppressed by whole field motion, while responses of the second class
are not suppressed (Born and Tootell, 1992).
Finally, the model that we have proposed is built on the premise of an active
mechanism for selecting subsets of unit responses to integrate over. While this is a
common aspect of many accounts of attentional phenomena, we suggest that active
selection may represent a fundamental aspect of cortical processing that occurs with
many pre-attentive phenomena, such as motion processing.
References
Adelson, E. H. and Bergen, J. R. (1985) Spatiotemporal energy models for the perception
of motion. J. Opt. Soc. Am. A, 2, 284-299.
Adelson, E. H. and Movshon, J. A. (1982) Phenomenal coherence of moving visual patterns.
Nature, 300, 523-525.
Albright, T. D. (1992) Form-cue invariant motion processing in primate visual cortex.
Science. 255, 1141-1143.
Born, R. T. and Tootell, R. B. H. (1992) Segregation of global and local motion processing
in primate middle temporal visual area. Nature, 357, 497-500.
Emerson, R. C., Citron, M. C., Vaughn, W. J., and Klein, S. A. (1987) Nonlinear directionally
selective subunits in complex cells of cat striate cortex. J. Neurophys. 58, 33-65.
Grzywacz, N.M. and Yuille, A.L. (1990) A model for the estimate of local image velocity
by cells in the visual cortex. Proc. R. Soc. Lond. B 239, 129-161.
Heeger, D.J. (1987) Model for the extraction of image flow. J. Opt. Soc. Am. A 4,
1455-1471.
Heeger, D.J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience, in press.
Horn, B.K.P. and Schunk, B.G. (1981) Determining optical flow. Artificial Intelligence
17, 185-203.
Maunsell, J. H. R. and Newsome, W. T. (1987) Visual processing in monkey extrastriate
cortex. Ann. Rev. Neurosci. 10, 363-401.
Nowlan, S.J. (1990) Competing experts: An experimental investigation of associative mixture models. Technical Report CRG-TR-90-5, Department of Computer Science, University of Toronto.
Nagel, H.H. (1987) On the estimation of optical flow: relations between different approaches and some new results. Artificial Intelligence 33, 299-324.
Stoner, G. R., Albright, T. D., and Ramachandran, V. S. (1990) Transparency and coherence in
human motion perception. Nature 344, 153-155.
Watson, A.B. and Ahumada, A.J. (1985) Model of human visual-motion sensing. J. Opt.
Soc. Am. A, 2, 322-342.
Batch Renormalization: Towards Reducing
Minibatch Dependence in Batch-Normalized Models
Sergey Ioffe
Google
[email protected]
Abstract
Batch Normalization is quite effective at accelerating and improving the training
of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that
this is due to the dependence of model layer inputs on all the examples in the
minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to
ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with
Batch Renormalization perform substantially better than batchnorm when training
with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training
efficiency.
1 Introduction
Batch Normalization ("batchnorm" [6]) has recently become a part of the standard toolkit for training deep networks. By normalizing activations, batch normalization helps stabilize the distributions
of internal activations as the model trains. Batch normalization also makes it possible to use significantly higher learning rates, and reduces the sensitivity to initialization. These effects help accelerate
the training, sometimes dramatically so. Batchnorm has been successfully used to enable state-of-the-art architectures such as residual networks [5].
Batchnorm works on minibatches in stochastic gradient training, and uses the mean and variance
of the minibatch to normalize the activations. Specifically, consider a particular node in the deep
network, producing a scalar value for each input example. Given a minibatch B of m examples,
consider the values of this node, $x_1 \ldots x_m$. Then batchnorm takes the form:

$$\hat{x}_i \leftarrow \frac{x_i - \mu_B}{\sigma_B}$$

where $\mu_B$ is the sample mean of $x_1 \ldots x_m$, and $\sigma_B^2$ is the sample variance (in practice, a small $\epsilon$
is added to it for numerical stability). It is clear that the normalized activations corresponding to
an input example will depend on the other examples in the minibatch. This is undesirable during
inference, and therefore the mean and variance computed over all training data can be used instead.
In practice, the model usually maintains moving averages of minibatch means and variances, and
during inference uses those in place of the minibatch statistics.
While it appears to make sense to replace the minibatch statistics with whole-data ones during
inference, this changes the activations in the network. In particular, this means that the upper layers
(whose inputs are normalized using the minibatch) are trained on representations different from
those computed in inference (when the inputs are normalized using the population statistics). When
the minibatch size is large and its elements are i.i.d. samples from the training distribution, this
difference is small, and can in fact aid generalization. However, minibatch-wise normalization may
have significant drawbacks:
For small minibatches, the estimates of the mean and variance become less accurate. These inaccuracies are compounded with depth, and reduce the quality of resulting models. Moreover, as each
example is used to compute the variance used in its own normalization, the normalization operation
is less well approximated by an affine transform, which is what is used in inference.
Non-i.i.d. minibatches can have a detrimental effect on models with batchnorm. For example, in a
metric learning scenario (e.g. [4]), it is common to bias the minibatch sampling to include sets of
examples that are known to be related. For instance, for a minibatch of size 32, we may randomly
select 16 labels, then choose 2 examples for each of those labels. Without batchnorm, the loss
computed for the minibatch decouples over the examples, and the intra-batch dependence introduced
by our sampling mechanism may, at worst, increase the variance of the minibatch gradient. With
batchnorm, however, the examples interact at every layer, which may cause the model to overfit to
the specific distribution of minibatches and suffer when used on individual examples.
The dependence of the batch-normalized activations on the entire minibatch makes batchnorm powerful, but it is also the source of its drawbacks. Several approaches have been proposed to alleviate
this. However, unlike batchnorm which can be easily applied to an existing model, these methods
may require careful analysis of nonlinearities [1] and may change the class of functions representable
by the model [2]. Weight normalization [11] presents an alternative, but does not offer guarantees
about the activations and gradients when the model contains arbitrary nonlinearities, or contains layers without such normalization. Furthermore, weight normalization has been shown to benefit from
mean-only batch normalization, which, like batchnorm, results in different outputs during training
and inference. Another alternative [10] is to use a separate and fixed minibatch to compute the normalization parameters, but this makes the training more expensive, and does not guarantee that the
activations outside the fixed minibatch are normalized.
In this paper we propose Batch Renormalization, a new extension to batchnorm. Our method ensures
that the activations computed in the forward pass of the training step depend only on a single example
and are identical to the activations computed in inference. This significantly improves the training
on non-i.i.d. or small minibatches, compared to batchnorm, without incurring extra cost.
2 Prior Work: Batch Normalization
We are interested in stochastic gradient optimization of deep networks. The task is to minimize the
loss, which decomposes over training examples:
$$\Theta = \arg\min_{\Theta} \frac{1}{N} \sum_{i=1}^{N} \ell_i(\Theta)$$

where $\ell_i$ is the loss incurred on the $i$th training example, and $\Theta$ is the vector of model weights. At
each training step, a minibatch of $m$ examples is used to compute the gradient

$$\frac{1}{m} \sum_{i=1}^{m} \frac{\partial \ell_i(\Theta)}{\partial \Theta}$$

which the optimizer uses to adjust $\Theta$.
Consider a particular node x in a deep network. We observe that x depends on all the model parameters that are used for its computation, and when those change, the distribution of x also changes.
Since x itself affects the loss through all the layers above it, this change in distribution complicates
the training of the layers above. This has been referred to as internal covariate shift. Batch Normalization [6] addresses it by considering the values of x in a minibatch B = {x1...m }. It then
normalizes them as follows:

$$\mu_B \leftarrow \frac{1}{m} \sum_{i=1}^{m} x_i$$

$$\sigma_B \leftarrow \sqrt{\frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2 + \epsilon}$$

$$\hat{x}_i \leftarrow \frac{x_i - \mu_B}{\sigma_B}$$

$$y_i \leftarrow \gamma \hat{x}_i + \beta \equiv \mathrm{BN}(x_i)$$
Here $\gamma$ and $\beta$ are trainable parameters (learned using the same procedure, such as stochastic gradient
descent, as all the other model weights), and $\epsilon$ is a small constant. Crucially, the computation of the
sample mean $\mu_B$ and sample standard deviation $\sigma_B$ are part of the model architecture, are themselves
functions of the model parameters, and as such participate in backpropagation. The backpropagation
formulas for batchnorm are easy to derive by chain rule and are given in [6].
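For concreteness, here is a minimal sketch of batchnorm's two modes (ours, not from [6]), which makes the training/inference discrepancy discussed above explicit:

```python
import numpy as np

def batchnorm(x, gamma, beta, mu, sigma, training, eps=1e-5):
    """x: (m,) values of one node over a minibatch; mu, sigma: moving averages."""
    if training:
        mu_b = x.mean()
        sigma_b = np.sqrt(x.var() + eps)
        x_hat = (x - mu_b) / sigma_b   # depends on the entire minibatch
    else:
        x_hat = (x - mu) / sigma       # depends on this example alone
    return gamma * x_hat + beta
```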
When applying batchnorm to a layer of activations x, the normalization takes place independently
for each dimension (or, in the convolutional case, for each channel or feature map). When x is itself
a result of applying a linear transform W to the previous layer, batchnorm makes the model invariant
to the scale of $W$ (ignoring the small $\epsilon$). This invariance makes it possible to not be picky about
weight initialization, and to use larger learning rates.
Besides the reduction of internal covariate shift, an intuition for another effect of batchnorm can be
obtained by considering the gradients with respect to different layers. Consider the normalized layer
$\hat{x}$, whose elements all have zero mean and unit variance. For a thought experiment, let us assume
that the dimensions of $\hat{x}$ are independent. Further, let us approximate the loss $\ell(\hat{x})$ as its first-order
Taylor expansion: $\ell \approx \ell_0 + g^T \hat{x}$, where $g = \frac{\partial \ell}{\partial \hat{x}}$. It then follows that $\mathrm{Var}[\ell] \approx \|g\|^2$, in which the
left-hand side does not depend on the layer we picked. This means that the norm of the gradient w.r.t.
a normalized layer, $\left\| \frac{\partial \ell}{\partial \hat{x}} \right\|$, is approximately the same for different normalized layers. Therefore the
gradients, as they flow through the network, do not explode nor vanish, thus facilitating the training.
While the assumptions of independence and linearity do not hold in practice, the gradient flow is in
fact significantly improved in batch-normalized models.
During inference, the standard practice is to normalize the activations using the moving averages $\mu$,
$\sigma^2$ instead of minibatch mean $\mu_B$ and variance $\sigma_B^2$:

$$y_{\text{inference}} = \frac{x - \mu}{\sigma} \cdot \gamma + \beta$$

which depends only on a single input example rather than requiring a whole minibatch.
It is natural to ask whether we could simply use the moving averages $\mu$, $\sigma$ to perform the normalization during training, since this would remove the dependence of the normalized activations on
the other example in the minibatch. This, however, has been observed to lead to the model blowing
up. As argued in [6], such use of moving averages would cause the gradient optimization and the
normalization to counteract each other. For example, the gradient step may increase a bias or scale
the convolutional weights, in spite of the fact that the normalization would cancel the effect of these
changes on the loss. This would result in unbounded growth of model parameters without actually
improving the loss. It is thus crucial to use the minibatch moments, and to backpropagate through
them.
3 Batch Renormalization
With batchnorm, the activities in the network differ between training and inference, since the normalization is done differently between the two models. Here, we aim to rectify this, while retaining
the benefits of batchnorm.
Let us observe that if we have a minibatch and normalize a particular node x using either the minibatch statistics or their moving averages, then the results of these two normalizations are related by
an affine transform. Specifically, let $\mu$ be an estimate of the mean of $x$, and $\sigma$ be an estimate of its
Input: Values of $x$ over a training mini-batch $B = \{x_{1...m}\}$; parameters $\gamma$, $\beta$; current moving mean
$\mu$ and standard deviation $\sigma$; moving average update rate $\alpha$; maximum allowed corrections $r_{max}$, $d_{max}$.
Output: $\{y_i = \mathrm{BatchRenorm}(x_i)\}$; updated $\mu$, $\sigma$.

Training:

$$\mu_B \leftarrow \frac{1}{m} \sum_{i=1}^{m} x_i$$

$$\sigma_B \leftarrow \sqrt{\epsilon + \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2}$$

$$r \leftarrow \text{stop\_gradient}\left( \mathrm{clip}_{[1/r_{max},\, r_{max}]}\left( \frac{\sigma_B}{\sigma} \right) \right)$$

$$d \leftarrow \text{stop\_gradient}\left( \mathrm{clip}_{[-d_{max},\, d_{max}]}\left( \frac{\mu_B - \mu}{\sigma} \right) \right)$$

$$\hat{x}_i \leftarrow \frac{x_i - \mu_B}{\sigma_B} \cdot r + d$$

$$y_i \leftarrow \gamma \hat{x}_i + \beta$$

$$\mu := \mu + \alpha(\mu_B - \mu) \qquad \text{(update moving averages)}$$

$$\sigma := \sigma + \alpha(\sigma_B - \sigma)$$

Inference:

$$y \leftarrow \gamma \cdot \frac{x - \mu}{\sigma} + \beta$$
Algorithm 1: Training (top) and inference (bottom) with Batch Renormalization, applied to activation x over a mini-batch. During backpropagation, standard chain rule is used. The values marked
with stop gradient are treated as constant for a given training step, and the gradient is not
propagated through them.
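A NumPy rendering of Algorithm 1 for a single node may help; this is our sketch, and since NumPy does not track gradients, the stop gradient on r and d is implicit here (in an autodiff framework it must be explicit):

```python
import numpy as np

def batch_renorm_train(x, gamma, beta, mu, sigma, alpha, r_max, d_max, eps=1e-5):
    """One Batch Renormalization training step for activations x of shape (m,).
    Returns the outputs y and the updated moving statistics (mu, sigma)."""
    mu_b = x.mean()
    sigma_b = np.sqrt(eps + ((x - mu_b) ** 2).mean())
    # Corrections toward the moving statistics; wrap both in stop_gradient
    # when using an autodiff framework, as in Algorithm 1.
    r = np.clip(sigma_b / sigma, 1.0 / r_max, r_max)
    d = np.clip((mu_b - mu) / sigma, -d_max, d_max)
    y = gamma * ((x - mu_b) / sigma_b * r + d) + beta
    mu += alpha * (mu_b - mu)            # update moving averages
    sigma += alpha * (sigma_b - sigma)
    return y, mu, sigma

def batch_renorm_infer(x, gamma, beta, mu, sigma):
    return gamma * (x - mu) / sigma + beta
```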
standard deviation, computed perhaps as a moving average over the last several minibatches. Then,
we have:
$$\frac{x_i - \mu}{\sigma} = \frac{x_i - \mu_B}{\sigma_B} \cdot r + d, \qquad \text{where } r = \frac{\sigma_B}{\sigma}, \quad d = \frac{\mu_B - \mu}{\sigma}$$
If $\mu = E[\mu_B]$ and $\sigma = E[\sigma_B]$, then $E[r] = 1$ and $E[d] = 0$ (the expectations are w.r.t. a minibatch
$B$). Batch Normalization, in fact, simply sets $r = 1$, $d = 0$.
We propose to retain r and d, but treat them as constants for the purposes of gradient computation.
In other words, we augment a network, which contains batch normalization layers, with a per-dimension affine transformation applied to the normalized activations. We treat the parameters $r$ and
$d$ of this affine transform as fixed, even though they were computed from the minibatch itself. It is
important to note that this transform is identity in expectation, as long as $\mu = E[\mu_B]$ and $\sigma = E[\sigma_B]$.
We refer to batch normalization augmented with this affine transform as Batch Renormalization: the
fixed (for the given minibatch) $r$ and $d$ correct for the fact that the minibatch statistics differ from
the population ones. This allows the above layers to observe the "correct" activations, namely
the ones that would be generated by the inference model. We emphasize that, unlike the trainable
parameters $\gamma$, $\beta$ of batchnorm, the corrections $r$ and $d$ are not trained by gradient descent, and vary
across minibatches since they depend on the statistics of the current minibatch.
In practice, it is beneficial to train the model for a certain number of iterations with batchnorm alone,
without the correction, then ramp up the amount of allowed correction. We do this by imposing
bounds on r and d, which initially constrain them to 1 and 0, respectively, and then are gradually
relaxed.
Algorithm 1 presents Batch Renormalization. Unlike batchnorm, where the moving averages are
computed during training but used only for inference, Batch Renorm does use $\mu$ and $\sigma$ during training to perform the correction. We use a fairly high rate of update $\alpha$ for these averages, to ensure that
they benefit from averaging multiple batches but do not become stale relative to the model parameters. We explicitly update the exponentially-decayed moving averages $\mu$ and $\sigma$, and optimize the
rest of the model using gradient optimization, with the gradients calculated via backpropagation:
$$\frac{\partial \ell}{\partial \hat{x}_i} = \frac{\partial \ell}{\partial y_i} \cdot \gamma$$

$$\frac{\partial \ell}{\partial \sigma_B} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot (x_i - \mu_B) \cdot \frac{-r}{\sigma_B^2}$$

$$\frac{\partial \ell}{\partial \mu_B} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{-r}{\sigma_B}$$

$$\frac{\partial \ell}{\partial x_i} = \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{r}{\sigma_B} + \frac{\partial \ell}{\partial \sigma_B} \cdot \frac{x_i - \mu_B}{m \sigma_B} + \frac{\partial \ell}{\partial \mu_B} \cdot \frac{1}{m}$$

$$\frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i} \cdot \hat{x}_i$$

$$\frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i}$$
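These equations translate directly into code; the sketch below (ours) takes the upstream gradient ∂ℓ/∂y and returns ∂ℓ/∂x, ∂ℓ/∂γ, and ∂ℓ/∂β, treating r and d as constants as required:

```python
import numpy as np

def batch_renorm_backward(dy, x, x_hat, gamma, mu_b, sigma_b, r):
    """dy: dl/dy over the minibatch, shape (m,); r is a constant here."""
    m = x.shape[0]
    dx_hat = dy * gamma
    dsigma_b = np.sum(dx_hat * (x - mu_b)) * (-r / sigma_b**2)
    dmu_b = np.sum(dx_hat) * (-r / sigma_b)
    dx = (dx_hat * r / sigma_b
          + dsigma_b * (x - mu_b) / (m * sigma_b)
          + dmu_b / m)
    dgamma = np.sum(dy * x_hat)
    dbeta = np.sum(dy)
    return dx, dgamma, dbeta
```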
These gradient equations reveal another interpretation of Batch Renormalization. Because the loss
$\ell$ is unaffected when all $x_i$ are shifted or scaled by the same amount, the functions $\ell(\{x_i + t\})$ and
$\ell(\{x_i \cdot (1 + t)\})$ are constant in $t$, and computing their derivatives at $t = 0$ gives $\sum_{i=1}^m \frac{\partial \ell}{\partial x_i} = 0$ and
$\sum_{i=1}^m x_i \frac{\partial \ell}{\partial x_i} = 0$. Therefore, if we consider the $m$-dimensional vector $x$ (with one element per
example in the minibatch), and further consider two vectors $p_0 = (1, \ldots, 1)$ and $p_1 = (x_1, \ldots, x_m)$,
then $\frac{\partial \ell}{\partial x}$ lies in the null-space of $p_0$ and $p_1$. In fact, it is easy to see from the Batch Renorm
backprop formulas that to compute the gradient $\frac{\partial \ell}{\partial x_i}$ from $\frac{\partial \ell}{\partial \hat{x}_i}$, we need to first scale the latter
by $r/\sigma_B$, then project it onto the null-space of $p_0$ and $p_1$. For $r = \sigma_B/\sigma$, this is equivalent to the
backprop for the transformation $\frac{x - \mu}{\sigma}$, but combined with the null-space projection. In other words,
Batch Renormalization allows us to normalize using moving averages $\mu$, $\sigma$ in training, and makes it
work using the extra projection step in backprop.
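This projection property is easy to verify numerically with the backward formulas above; with an unclipped r and γ = 1 (our assumptions), the resulting gradient is orthogonal to both p0 and p1 up to floating-point error:

```python
import numpy as np
rng = np.random.default_rng(0)

m = 32
x = rng.standard_normal(m)
mu, sigma = 0.1, 1.2                 # assumed moving averages
mu_b, sigma_b = x.mean(), x.std()    # no epsilon, so the check is exact
r = sigma_b / sigma                  # unclipped correction

dy = rng.standard_normal(m)          # arbitrary upstream gradient, gamma = 1
dsigma_b = np.sum(dy * (x - mu_b)) * (-r / sigma_b**2)
dmu_b = np.sum(dy) * (-r / sigma_b)
dx = dy * r / sigma_b + dsigma_b * (x - mu_b) / (m * sigma_b) + dmu_b / m

print(dx @ np.ones(m))               # ~0: orthogonal to p0 = (1, ..., 1)
print(dx @ x)                        # ~0: orthogonal to p1 = (x_1, ..., x_m)
```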
Batch Renormalization shares many of the beneficial properties of batchnorm, such as insensitivity
to initialization and ability to train efficiently with large learning rates. Unlike batchnorm, our
method ensures that all layers are trained on internal representations that will actually be used
during inference.
4 Results
To evaluate Batch Renormalization, we applied it to the problem of image classification. Our baseline model is Inception v3 [13], trained on 1000 classes from the ImageNet training set [9], and evaluated
on the ImageNet validation data. In the baseline model, batchnorm was used after convolution and
before the ReLU [8]. To apply Batch Renorm, we simply swapped it into the model in place of
batchnorm. Both methods normalize each feature map over examples as well as over spatial locations. We fix the scale ? = 1, since it could be propagated through the ReLU and absorbed into the
next layer.
The training used 50 synchronized workers [3]. Each worker processed a minibatch of 32 examples
per training step. The gradients computed for all 50 minibatches were aggregated and then used
by the RMSProp optimizer [14]. As is common practice, the inference model used exponentiallydecayed moving averages of all model parameters, including the ? and ? computed by both batchnorm and Batch Renorm.
For Batch Renorm, we used rmax = 1, dmax = 0 (i.e. simply batchnorm) for the first 5000 training
steps, after which these were gradually relaxed to reach rmax = 3 at 40k steps, and dmax = 5 at 25k
Figure 1: (a) Validation top-1 accuracy of Inception-v3 model with batchnorm and its Batch Renorm
version, trained on 50 synchronized workers, each processing minibatches of size 32. The Batch
Renorm model achieves a marginally higher validation accuracy. (b) Validation accuracy for
models trained with either batchnorm or Batch Renorm, where normalization is performed for sets
of 4 examples (but with the gradients aggregated over all 50 × 32 examples processed by the 50
workers). Batch Renorm allows the model to train faster and achieve a higher accuracy, although
normalizing sets of 32 examples performs better.
steps. These final values resulted in clipping a small fraction of the $r$ values, and none of the $d$ values. However, at the
beginning of training, when the learning rate was larger, it proved important to increase rmax slowly:
otherwise, occasional large gradients were observed to suddenly and severely increase the loss. To
account for the fact that the means and variances change as the model trains, we used relatively fast
updates to the moving statistics $\mu$ and $\sigma$, with $\alpha = 0.01$. Because of this and keeping $r_{max} = 1$ for a
relatively large number of steps, we did not need to apply initialization bias correction [7].
All the hyperparameters other than those related to normalization were fixed between the models
and across experiments.
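The ramp-up can be expressed as a simple schedule; the linear interpolation below is our guess, since the paper only specifies the endpoints (rmax: 1 → 3 by 40k steps, dmax: 0 → 5 by 25k steps, both starting at 5k):

```python
def correction_limits(step):
    """Linearly ramp rmax from 1 to 3 (steps 5k-40k) and dmax from 0 to 5 (5k-25k)."""
    def ramp(start, end, lo, hi):
        t = min(max((step - start) / (end - start), 0.0), 1.0)
        return lo + t * (hi - lo)
    return ramp(5000, 40000, 1.0, 3.0), ramp(5000, 25000, 0.0, 5.0)

r_max, d_max = correction_limits(20000)  # e.g. r_max ~ 1.86, d_max = 3.75
```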
4.1 Baseline
As a baseline, we trained the batchnorm model using the minibatch size of 32. More specifically,
batchnorm was applied to each of the 50 minibatches; each example was normalized using 32 examples, but the resulting gradients were aggregated over 50 minibatches. This model achieved the
top-1 validation accuracy of 78.3% after 130k training steps.
To verify that Batch Renorm does not diminish performance on such minibatches, we also trained
the model with Batch Renorm, see Figure 1(a). The test accuracy of this model closely tracked the
baseline, achieving a slightly higher test accuracy (78.5%) after the same number of steps.
4.2 Small minibatches
To investigate the effectiveness of Batch Renorm when training on small minibatches, we reduced
the number of examples used for normalization to 4. Each minibatch of size 32 was thus broken into
"microbatches", each having 4 examples; each microbatch was normalized independently, but the
loss for each minibatch was computed as before. In other words, the gradient was still aggregated
over 1600 examples per step, but the normalization involved groups of 4 examples rather than 32 as
in the baseline. Figure 1(b) shows the results.
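Normalizing in groups of 4 amounts to reshaping the batch before computing statistics; a sketch for a single node (our illustration):

```python
import numpy as np

def microbatch_normalize(x, group_size=4, eps=1e-5):
    """x: (m,) activations of one node. Statistics are computed per group of
    `group_size` examples, while the loss still covers the full minibatch."""
    g = x.reshape(-1, group_size)
    g_hat = (g - g.mean(axis=1, keepdims=True)) / \
            np.sqrt(g.var(axis=1, keepdims=True) + eps)
    return g_hat.reshape(-1)

y = microbatch_normalize(np.random.randn(32))
```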
The validation accuracy of the batchnorm model is significantly lower than the baseline that normalized over minibatches of size 32, and training is slow, achieving 74.2% at 210k steps. We obtain
a substantial improvement much faster (76.5% at 130k steps) by replacing batchnorm with Batch
Renorm. However, the resulting test accuracy is still below what we get when applying either batchnorm or Batch Renorm to size 32 minibatches. Although Batch Renorm improves the training with
small minibatches, it does not eliminate the benefit of having larger ones.
Figure 2: Validation accuracy when training on non-i.i.d. minibatches, obtained by sampling 2
images for each of 16 (out of total 1000) random labels. This distribution bias results not only in a
low test accuracy, but also low accuracy on the training set, with an eventual drop. This indicates
overfitting to the particular minibatch distribution, which is confirmed by the improvement when the
test minibatches also contain 2 images per label, and batchnorm uses minibatch statistics $\mu_B$, $\sigma_B$
during inference. It improves further if batchnorm is applied separately to 2 halves of a training
minibatch, making each of them more i.i.d. Finally, by using Batch Renorm, we are able to just train
and evaluate normally, and achieve the same validation accuracy as we get for i.i.d. minibatches in
Fig. 1(a).
4.3 Non-i.i.d. minibatches
When examples in a minibatch are not sampled independently, batchnorm can perform rather poorly.
However, sampling with dependencies may be necessary for tasks such as for metric learning [4, 12].
We may want to ensure that images with the same label have more similar representations than
otherwise, and to learn this we require that a reasonable number of same-label image pairs can be
found within the same minibatch.
In this experiment (Figure 2), we selected each minibatch of size 32 by randomly sampling 16 labels
(out of the total 1000) with replacement, then randomly selecting 2 images for each of those labels.
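The biased sampler is straightforward to reproduce; a sketch, assuming a mapping from label to the list of its image ids (the data structure is our assumption):

```python
import numpy as np

def sample_non_iid_minibatch(images_by_label, num_labels=16, per_label=2,
                             rng=np.random.default_rng()):
    """Sample labels with replacement, then `per_label` distinct images per label,
    giving a minibatch of num_labels * per_label dependent examples."""
    labels = rng.choice(sorted(images_by_label), size=num_labels, replace=True)
    batch = [img for lab in labels
             for img in rng.choice(images_by_label[lab], size=per_label, replace=False)]
    return batch
```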
When training with batchnorm, the test accuracy is much lower than for i.i.d. minibatches, achieving
only 67%. Surprisingly, even the training accuracy is much lower (72.8%) than the test accuracy in
the i.i.d. case, and in fact exhibits a drop that is consistent with overfitting. We suspect that this is
in fact what happens: the model learns to predict labels for images that come in a set, where each
image has a counterpart with the same label. This does not directly translate to classifying images
individually, thus producing a drop in the accuracy computed on the training data. To verify this,
we also evaluated the model in the "training mode", i.e. using minibatch statistics $\mu_B$, $\sigma_B$ instead
of moving averages $\mu$, $\sigma$, where each test minibatch had size 50 and was obtained using the same
procedure as the training minibatches: 25 labels, with 2 images per label. As expected, this does
much better, achieving 76.5%, though still below the baseline accuracy. Of course, this evaluation
scenario is usually infeasible, as we want the image representation to be a deterministic function of
that image alone.
We can improve the accuracy for this problem by splitting each minibatch into two halves of size
16 each, so that for every pair of images belonging to the same class, one image is assigned to the
first half-minibatch, and the other to the second. Each half is then more i.i.d., and this achieves a
much better test accuracy (77.4% at 140k steps), but still below the baseline. This method is only
applicable when the number of examples per label is small (since this determines the number of
microbatches that a minibatch needs to be split into).
With Batch Renorm, we simply trained the model with minibatch size of 32. The model achieved
the same test accuracy (78.5% at 120k steps) as the equivalent model on i.i.d. minibatches, vs.
67% obtained with batchnorm. By replacing batchnorm with Batch Renorm, we ensured that the
inference model can effectively classify individual images. This has completely eliminated the effect
of overfitting the model to image sets with a biased label distribution.
5 Conclusions
We have demonstrated that Batch Normalization, while effective, is not well suited to small or
non-i.i.d. training minibatches. We hypothesized that these drawbacks are due to the fact that the
activations in the model, which are in turn used by other layers as inputs, are computed differently
during training than during inference. We address this with Batch Renormalization, which replaces
batchnorm and ensures that the outputs computed by the model are dependent only on the individual
examples and not the entire minibatch, during both training and inference.
Batch Renormalization extends batchnorm with a per-dimension correction to ensure that the activations match between the training and inference networks. This correction is identity in expectation;
its parameters are computed from the minibatch but are treated as constant by the optimizer. Unlike
batchnorm, where the means and variances used during inference do not need to be computed until
the training has completed, Batch Renormalization benefits from having these statistics directly participate in the training. Batch Renormalization is as easy to implement as batchnorm itself, runs at
the same speed during both training and inference, and significantly improves training on small or
non-i.i.d. minibatches. Our method does have extra hyperparameters: the update rate $\alpha$ for the moving averages, and the schedules for correction limits $d_{max}$, $r_{max}$. We have observed, however, that
stable training can be achieved even without this clipping, by using a saturating nonlinearity such as
$\min(\mathrm{ReLU}(\cdot), 6)$, and simply turning on renormalization after an initial warm-up using batchnorm
alone. A more extensive investigation of the effect of these parameters is a part of future work.
Batch Renormalization offers a promise of improving the performance of any model that would
normally use batchnorm. This includes Residual Networks [5]. Another application is Generative
Adversarial Networks [10], where the non-determinism introduced by batchnorm has been found to
be an issue, and Batch Renorm may provide a solution.
Finally, Batch Renormalization may benefit applications where applying batch normalization has
been difficult, such as recurrent networks. There, batchnorm would require each timestep to be
normalized independently, but Batch Renormalization may make it possible to use the same running
averages to normalize all timesteps, and then update those averages using all timesteps. This remains
one of the areas that warrants further exploration.
References
[1] Devansh Arpit, Yingbo Zhou, Bhargava U Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv
preprint arXiv:1603.01431, 2016.
[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016.
[3] Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016.
[4] Jacob Goldberger, Sam Roweis, Geoff Hinton, and Ruslan Salakhutdinov. Neighbourhood
components analysis. In Advances in Neural Information Processing Systems 17, 2004.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[6] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. In Proceedings of the 32nd International Conference on
Machine Learning (ICML-15), pages 448–456, 2015.
[7] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR,
abs/1412.6980, 2014.
[8] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, pages 807–814. Omnipress, 2010.
[9] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014.
[10] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. In Advances in Neural Information Processing Systems,
pages 2226–2234, 2016.
[11] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization
to accelerate training of deep neural networks. In Advances in Neural Information Processing
Systems, pages 901–909, 2016.
[12] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for
face recognition and clustering. CoRR, abs/1503.03832, 2015.
[13] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[14] T. Tieleman and G. Hinton. Lecture 6.5 - rmsprop. COURSERA: Neural Networks for Machine Learning, 2012.
Generating steganographic images via adversarial
training
Jamie Hayes
University College London
[email protected]
George Danezis
University College London
The Alan Turing Institute
[email protected]
Abstract
Adversarial training has proved to be competitive against supervised learning
methods on computer vision tasks. However, studies have mainly been confined
to generative tasks such as image synthesis. In this paper, we apply adversarial
training techniques to the discriminative task of learning a steganographic algorithm. Steganography is a collection of techniques for concealing the existence
of information by embedding it within a non-secret medium, such as cover texts
or images. We show that adversarial training can produce robust steganographic
techniques: our unsupervised training scheme produces a steganographic algorithm
that competes with state-of-the-art steganographic techniques. We also show that
supervised training of our adversarial model produces a robust steganalyzer, which
performs the discriminative task of deciding if an image contains secret information.
We define a game between three parties, Alice, Bob and Eve, in order to simultaneously train both a steganographic algorithm and a steganalyzer. Alice and Bob
attempt to communicate a secret message contained within an image, while Eve
eavesdrops on their conversation and attempts to determine if secret information is
embedded within the image. We represent Alice, Bob and Eve by neural networks,
and validate our scheme on two independent image datasets, showing our novel
method of studying steganographic problems is surprisingly competitive against
established steganographic techniques.
1 Introduction
Steganography and cryptography both provide methods for secret communication. Authenticity
and integrity of communications are central aims of modern cryptography. However, traditional
cryptographic schemes do not aim to hide the presence of secret communications. Steganography
conceals the presence of a message by embedding it within a communication the adversary does
not deem suspicious. Recent details of mass surveillance programs have shown that meta-data of
communications can lead to devastating privacy leakages¹. NSA officials have stated that they "kill people based on meta-data" [8]; the mere presence of a secret communication can have life or death
consequences even if the content is not known. Concealing both the content as well as the presence
of a message is necessary for privacy sensitive communication.
Steganographic algorithms are designed to hide information within a cover message such that the
cover message appears unaltered to an external adversary. A great deal of effort is afforded to
designing steganographic algorithms that minimize the perturbations within a cover message when
a secret message is embedded within, while allowing for recovery of the secret message. In this
work we ask if a steganographic algorithm can be learned in an unsupervised manner, without
¹ See EFF's guide: https://www.eff.org/files/2014/05/29/unnecessary_and_disproportionate.pdf.
human domain knowledge. Note that steganography only aims to hide the presence of a message.
Thus, it is nearly always the case that the message is encrypted prior to embedding using a standard
cryptographic scheme; the embedded message is therefore indistinguishable from a random string.
The receiver of the steganographic image will then decode to reveal the ciphertext of the message and
then decrypt using an established shared key.
For the unsupervised design of steganographic techniques, we leverage ideas from the field of
adversarial training [7]. Typically, adversarial training is used to train generative models on tasks
such as image generation and speech synthesis. We design a scheme that aims to embed a secret
message within an image. Our task is discriminative: the embedding algorithm takes in a cover image
and produces a steganographic image, while the adversary tries to learn weaknesses in the embedding
algorithm, resulting in the ability to distinguish cover images from steganographic images.
The success of a steganographic algorithm or a steganalysis technique over one another amounts to the ability to model the cover distribution correctly [5]. So far, steganographic schemes have used human-based rules to "learn" this distribution and perturb it in a way that disrupts it least. However,
steganalysis techniques commonly use machine learning models to learn the differences in distributions between the cover and steganographic images. Based on this insight we pursue the following
hypothesis:
Hypothesis: Machine learning is as capable as human-based rules for the task of modeling the cover
distribution, and so naturally lends itself to the task of designing steganographic algorithms, as well
as performing steganalysis.
In this paper, we introduce the first steganographic algorithm produced entirely in an unsupervised
manner, through a novel adversarial training scheme. We show that our scheme can be successfully
implemented in practice between two communicating parties, and additionally that with supervised
training, the steganalyzer, Eve, can compete against state-of-the-art steganalysis methods. To the best
of our knowledge, this is one of the first real-world applications of adversarial training, aside from
traditional adversarial learning applications such as image generation tasks.
2 Related work

2.1 Adversarial learning
Two recent designs have applied adversarial training to cryptographic and steganographic problems.
Abadi and Andersen [2] used adversarial training to teach two neural networks to encrypt a short
message, that fools a discriminator. However, it is hard to offer an evaluation to show that the
encryption scheme is computationally difficult to break, nor is there evidence that this encryption
scheme is competitive against readily available public key encryption schemes. Adversarial training
has also been applied to steganography [4], but in a different way to our scheme. Whereas we seek to
train a model that learns a steganographic technique by itself, Volkhonskiy et al.'s work augments the
original GAN process to generate images which are more susceptible to established steganographic
algorithms. In addition to the normal GAN discriminator, they introduce a steganalyzer that receives
examples from the generator that may or may not contain secret messages. The generator learns to
generate realistic images by fooling the discriminator of the GAN, and learns to be a secure container
by fooling the steganalyzer. However, they do not measure performance against state-of-the-art
steganographic techniques making it difficult to estimate the robustness of their scheme.
2.2 Steganography
Steganography research can be split into two subfields: the study of steganographic algorithms and
the study of steganalyzers. Research into steganographic algorithms concentrates on finding methods
to embed secret information within a medium while minimizing the perturbations within that medium.
Steganalysis research seeks to discover methods to detect such perturbations. Steganalysis is a binary
classification task: discovering whether or not secret information is present with a message, and so
machine learning classifiers are commonly used as steganalyzers.
Least significant bit (LSB) [16] is a simple steganographic algorithm used to embed a secret message
within a cover image. Each pixel in an image is made up of three RGB color channels (or one for
grayscale images), and each color channel is represented by a number of bits. For example, it is
Figure 1: (a) Diagram of the training game. (b) How two parties, Carol and David, use the scheme in practice:
(1) Two parties establish a shared key. (2) Carol trains the scheme on a set of images. Information about model
weights, architecture and the set of images used for training is encrypted under the shared key and sent to David,
who decrypts to create a local copy of the models. (3) Carol then uses the Alice model to embed a secret
encrypted message, creating a steganographic image. This is sent to David, who uses the Bob model to decode
the encrypted message and subsequently decrypt.
common to represent a pixel in a grayscale image with an 8-bit binary sequence. The LSB technique
then replaces the least significant bits of the cover image by the bits of the secret message. By only
manipulating the least significant bits of the cover image, the variation in color of the original image
is minimized. However, information from the original image is always lost when using the LSB technique, and the technique is known to be vulnerable to steganalysis [6].
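For reference, a minimal numpy sketch of LSB embedding and extraction on a grayscale image (our own illustration, not taken from [16]; function names are hypothetical):

import numpy as np

def lsb_embed(cover, bits):
    """Write each message bit into the least significant bit of one pixel."""
    flat = cover.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # clear LSB, set message bit
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the message back from the least significant bits."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
message = np.random.randint(0, 2, size=100, dtype=np.uint8)
stego = lsb_embed(cover, message)
assert np.array_equal(lsb_extract(stego, 100), message)
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1  # per-pixel change at most 1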
Most steganographic schemes for images use a distortion function that forces the embedding process to be localized to parts of the image that are considered noisy or difficult to model. Advanced steganographic algorithms attempt to minimize the distortion function between a cover image, C, and a steganographic image, C′,

d(C, C′) = f(C, C′) · |C − C′|

It is the choice of the function f, the cost of distorting a pixel, which changes for different steganographic algorithms.
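Read as an elementwise weighting summed over pixels (one plausible reading of the product above), this can be rendered in a few lines of numpy; the uniform cost map f here is a hypothetical placeholder, whereas real schemes derive f from local image statistics:

import numpy as np

def distortion(cover, stego, f):
    """d(C, C') with per-pixel costs f: sum of f_ij * |C_ij - C'_ij|."""
    return float(np.sum(f * np.abs(cover.astype(float) - stego.astype(float))))

cover = np.random.randint(0, 256, (32, 32))
stego = cover.copy()
stego[0, 0] ^= 1                          # flip a single least significant bit
f = np.ones_like(cover, dtype=float)      # hypothetical uniform pixel costs
print(distortion(cover, stego, f))        # 1.0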
HUGO [18] is considered to be one of the most secure steganographic techniques. It defines a
distortion function domain by assigning costs to pixels based on the effect of embedding some
information within a pixel; the space of pixels is condensed into a feature space using a weighted
norm function. WOW (Wavelet Obtained Weights) [9] is another advanced steganographic method
that embeds information into a cover image according to regions of complexity. The more texturally complex a region of an image is, the more pixel values within that region will be modified. Finally, S-UNIWARD [10] proposes a universal distortion function that is agnostic to the
embedding domain. However, the end goal is much the same: to minimize this distortion function,
and embed information in noisy regions or complex textures, avoiding smooth regions of the cover
images. In Section 4.2, we compare our results against a state-of-the-art steganalyzer, ATS [13]. ATS
uses labeled data to build artificial training sets of cover and steganographic images, and is trained
using an SVM with a Gaussian kernel. They show that this technique outperforms other popular
steganalysis tools.
3 Steganographic adversarial training
This section discusses our steganographic scheme, the models we use and the information each
party wishes to conceal or reveal. After laying this theoretical groundwork, we present experiments
supporting our claims.
3.1 Learning objectives
Our training scheme involves three parties: Alice, Bob and Eve. Alice sends a message to Bob, Eve
can eavesdrop on the link between Alice and Bob and would like to discover if there is a secret
message embedded within their communication. In classical steganography, Eve (the Steganalyzer)
is passed both unaltered images, called cover images, and images with secret messages embedded
within, called steganographic images. Given an image, Eve places a confidence score of how likely
this is a cover or steganographic image. Alice embeds a secret message within the cover image,
producing a steganographic image, and passes this to Bob. Bob knows the embedding process and so
can recover the message. In our scheme, Alice, Bob and Eve are neural networks. Alice is trained to
learn to produce a steganographic image such that Bob can recover the secret message, and such that
Eve can do no better than randomly guess if a sample is a cover or steganographic image.
The full scheme is depicted in Figure 1a: Alice receives a cover image, C, and a secret encrypted
message, M, as inputs. Alice outputs a steganographic image, C′, which is given to both Bob and Eve. Bob outputs M′, the secret message he attempts to recover from C′. We say Bob performs perfectly if M = M′. In addition to the steganographic images, Eve also receives the cover images. Given an input X, Eve outputs the probability, p, that X = C. Alice tries to learn an embedding scheme such that Eve always outputs p = 1/2. We do not train Eve to maximize her prediction error, since she can then simply flip her decision and perform with perfect classification accuracy. Figure 1b shows how the scheme should be used in practice if two people wish to communicate a steganographic message using our scheme. The cost of sending the encrypted model information from Carol to David is low, with an average of 70MB. Note that in Figure 1b, steps (1) and (2), the set-up of the shared key and sharing of model information, is performed offline. We assume, as is common in cryptographic
research, that this initial set-up phase is not visible to an adversary.
At the beginning of training, a human can easily separate cover images from steganographic images,
as Alice has not yet learned how to embed the secret message such that there is no visible difference
in the cover image. However, we train Eve much like a discriminator in a GAN, where we tie her
predictive power to the embedding capacity of Alice. When Alice produces a steganographic image
that does not resemble the cover image, Eve does not have the ability to perfectly separate cover from
steganographic images. As training continues, Eve becomes better at her task, but then so does Alice
as her weights are updated, in part, based on the loss of Eve.
Similarly to Abadi and Andersen [2], we let θ_A, θ_B, θ_E denote the parameters of Alice, Bob and Eve, respectively. We write A(θ_A, C, M) for Alice's output on C and M, B(θ_B, C′) for Bob's output on C′, and E(θ_E, C, C′) for Eve's output on C and C′. Let L_A, L_B, L_E denote the loss of Alice, Bob and Eve, respectively. Then, we have the following relations:

B(θ_B, C′) = B(θ_B, A(θ_A, C, M))
E(θ_E, C, C′) = E(θ_E, C, A(θ_A, C, M))

We set Bob's loss (the secret message reconstruction loss) to be the Euclidean distance between M and M′:

L_B(θ_A, θ_B, M, C) = d(M, B(θ_B, C′)) = d(M, B(θ_B, A(θ_A, C, M))) = d(M, M′)

As is common with GAN discriminator implementations, we set Eve's loss to be sigmoid cross entropy loss:

L_E(θ_E, C, C′) = −y · log(E(θ_E, x)) − (1 − y) · log(1 − E(θ_E, x)),

where y = 0 if x = C′ and y = 1 if x = C. Alice's loss is given as a weighted sum of Bob's loss, Eve's loss on steganographic images, and an additional reconstructive loss term:

L_A(θ_A, C, M) = λ_A · d(C, C′) + λ_B · L_B + λ_E · L_E(θ_E, C, C′),

where d(C, C′) is the Euclidean distance between the cover image and the steganographic image, and λ_A, λ_B, λ_E ∈ ℝ define the weight given to each respective loss term.
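A minimal numpy restatement of these three losses (our own sketch; the network outputs are stand-in arguments, and the helper names are arbitrary):

import numpy as np

def bob_loss(m, m_rec):
    """L_B: Euclidean distance between the true and recovered message."""
    return float(np.linalg.norm(m - m_rec))

def eve_loss(p, y, eps=1e-8):
    """L_E: sigmoid cross entropy for Eve's prediction p on an input with label y
    (y = 1 for a cover image, y = 0 for a steganographic image)."""
    return float(-y * np.log(p + eps) - (1 - y) * np.log(1 - p + eps))

def alice_loss(c, c_stego, l_b, l_e, lam_a, lam_b, lam_e):
    """L_A: weighted sum of image reconstruction loss, Bob's loss, and Eve's loss."""
    return lam_a * float(np.linalg.norm(c - c_stego)) + lam_b * l_b + lam_e * l_e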
Our goal is not only to explore whether neural networks can produce steganographic embedding
algorithms in an unsupervised manner, but whether they are competitive against steganographic
algorithms like HUGO, WOW and S-UNIWARD, that have been designed by steganography experts.
We did not intend to encode a specific algorithm within the neural network, rather we would like to
give the networks the opportunity to devise their own.
Figure 2: Cover and steganographic images from the celebA dataset, with embedding rates of 0.1bpp and 0.4bpp. (a) Cover images. (b) Steganographic images (0.1bpp). (c) Steganographic images (0.4bpp).
Figure 3: Results on celebA dataset at embedding rates of 0.1bpp and 0.4bpp. (a) Alice, Bob and Eve error for 0.1bpp. (b) Alice, Bob and Eve error for 0.4bpp. (c) Bob decoding success for 0.1bpp. (d) Bob decoding success for 0.4bpp.
3.2 Architecture
Alice (Steganographic Encoder). Alice accepts as input a cover image and a random n-bit binary
message. The cover image is flattened and concatenated with the secret message, which is then fed
into the neural network. We would like the locality of the embedding process to be learned instead of
fixed; the first layer is fully connected to the next, allowing the secret message to be combined with
any region of the cover image. We then reshape to 4 × 4 × I·8, where I is the width of the cover image.
Each of the next four layers is a sequence of upsampling fractionally-strided convolutions, batch
normalization and ReLU [17], except for the final layer where tanh [12] is used as the activation
function.
Bob (Steganographic Decoder). Bob accepts as input the steganographic image output by Alice.
Bob uses a deep convolutional architecture to extract the important bits from the output of Alice.
It is made up of four fractionally-strided convolutions and one final fully connected layer. Leaky
ReLU [15] is used in all but the final layer, where tanh is used. Batch normalization is applied at each
layer.
Eve (Steganalyzer). Eve accepts as input both cover images and steganographic images. Eve has a
similar architecture to Bob but uses the sigmoid activation function instead of tanh at the final layer
to output probabilities from the logits.
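To make the shapes concrete, here is a minimal PyTorch sketch of the Alice encoder described above. It is illustrative only: it uses three upsampling stages (rather than the four in the text) so that a 4 × 4 seed lands exactly on a 32 × 32 output, and the channel counts are our own choices, not the authors'.

import torch
import torch.nn as nn

I, MSG_BITS = 32, 100  # cover width and message length (roughly 0.1bpp on 32x32)

class Alice(nn.Module):
    """Steganographic encoder sketch: (cover, message) -> steganographic image."""
    def __init__(self):
        super().__init__()
        # Fully connected first layer lets the message touch any region of the image.
        self.fc = nn.Linear(I * I + MSG_BITS, 4 * 4 * I * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(I * 8, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),     # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),       # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, cover, msg):
        h = self.fc(torch.cat([cover.flatten(1), msg], dim=1))
        return self.up(h.view(-1, I * 8, 4, 4))

stego = Alice()(torch.rand(8, 1, I, I), torch.rand(8, MSG_BITS))
assert stego.shape == (8, 1, I, I)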
4 Experiments
As a proof of concept, we implemented our adversarial training scheme on two image datasets:
celebrity faces in the wild (celebA) [14] and a standard steganography research dataset, BOSS². We concatenate an n-bit random message with each sample of each dataset. We vary the size of the message, n, to test the limits of the amount of information that can be effectively hidden inside a cover image. This corresponds to the number of bits per pixel (bpp) that can be hidden by a steganographic algorithm, a common measure of success in steganographic research. On average, steganographic algorithms can successfully hide approximately 0.4bpp. Our datasets are formed of 32 × 32 pixel images, and so we vary n from 100 to 400; this corresponds to hiding between approximately 0.1bpp and 0.4bpp (e.g., 100/(32 × 32) ≈ 0.1 bits per pixel). We intentionally randomize the secret message so that the
network does not learn a specific function associated with a specific group of messages. We want to
be sure that our model can generalize to any message that is less than n bits³. For each experiment, we performed grid search to find the optimum loss weights, λ_A, λ_B, λ_E, for Alice. All experiments
in this section were performed in TensorFlow [1, 3], on a workstation with a Tesla K40 GPU card.
4.1 CelebA dataset
We first motivate our work by showing our model can produce steganographic images that are almost
indistinguishable from the cover images. We vary the size of the embedded secret message, to explore
how many bits may be stored in a cover image while remaining undetected by a steganalyzer. As
expected, we find that a larger perturbation in the cover image results in more communicated bits but
carries a greater risk of detection.
The celebA dataset consists of 202,599 images of celebrity faces from 10,177 unique identities. We train in batches of 32, and use the Adam optimizer [11] with a learning rate of 2 × 10⁻⁴. At each
batch we alternate training either Alice and Bob, or Eve.
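As a concrete illustration of this alternating schedule, the following is a self-contained PyTorch sketch with toy one-layer stand-ins for the three networks; the real models are the convolutional architectures of Section 3.2, and the way Alice's objective "pushes Eve towards p = 1/2" via a soft cross-entropy target is our own rendering, not the authors' code.

import torch
import torch.nn as nn

class ToyAlice(nn.Module):
    """Toy encoder: flattened cover image + message -> image-shaped output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32 * 32 + 100, 32 * 32), nn.Tanh())
    def forward(self, cover, msg):
        out = self.net(torch.cat([cover.flatten(1), msg], dim=1))
        return out.view(-1, 1, 32, 32)

alice = ToyAlice()
bob = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 100), nn.Tanh())   # decoder
eve = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1), nn.Sigmoid())  # steganalyzer
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=2e-4)
opt_eve = torch.optim.Adam(eve.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(150):
    cover = torch.rand(32, 1, 32, 32)                 # stand-in batch of cover images
    msg = torch.randint(0, 2, (32, 100)).float()      # random 100-bit messages
    stego = alice(cover, msg)
    if step % 2 == 0:
        # Update Alice and Bob: reconstruction + decoding + "fool Eve towards p = 1/2"
        loss = (torch.norm(cover - stego) + torch.norm(msg - bob(stego))
                + bce(eve(stego), torch.full((32, 1), 0.5)))
        opt_ab.zero_grad(); loss.backward(); opt_ab.step()
    else:
        # Update Eve: classify cover as 1, steganographic as 0
        loss = (bce(eve(cover), torch.ones(32, 1))
                + bce(eve(stego.detach()), torch.zeros(32, 1)))
        opt_eve.zero_grad(); loss.backward(); opt_eve.step()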
Figure 2b and Figure 2c show the steganographic images output by Alice after 150 training steps, for embedding rates of 0.1bpp and 0.4bpp, respectively. Our scheme easily copes with an embedding rate of 0.1bpp for a 32 × 32 image, which is equivalent to communicating a short 12-character ASCII message. Figure 3a and Figure 3c show the training loss of each model and the success of Bob decoding the secret message over time, respectively. In the first few rounds of training, the visual quality of Alice's output is low and Bob is essentially randomly guessing the message. After approximately 30 steps, Alice learns to successfully embed information in a cover image such that Eve is fooled, and Bob learns to correctly decode the message. After 150 training steps, Bob decodes the message with an average success of 100%, while Eve's loss remains the same. Figure 3b and Figure 3d show similar results, with both sample quality and decoding success high by the end of training. Concealing 0.4bpp corresponds to successfully hiding a message of 51 ASCII characters. Due to the larger amount of information communicated between Alice and Bob, it takes a longer period of time for Alice to discover an effective embedding scheme, and for Bob to correctly discover the secret message. After 150 training steps, Bob can successfully decode 100% of the message. There is no noticeable decrease in image quality between 0.1bpp and 0.4bpp; however, we found that above 0.4bpp image quality suffered and it was no longer possible to balance realistic image creation capable of fooling Eve with convergence of message decoding. For both embedding rates of 0.1bpp and 0.4bpp, Eve's loss remained constant at approximately 1.4. Eve's loss is the sum of losses on both steganographic images and cover images, which both remained steady at approximately 0.69. Due to the logarithmic term in the loss function, this is equivalent to randomly guessing if an image carries a secret message.
4.2 BOSS dataset
In addition to our experiments on the celebA dataset, we trained our steganographic scheme on the
BOSS image dataset, which is commonly used as a benchmark in steganography research. BOSS
is made up of 10,000 grayscale images depicting a variety of scenes such as landscapes, buildings
and animals. We expected our scheme to perform worse than on the celebA dataset, since the cover
² http://agents.fel.cvut.cz/boss/index.php?mode=VIEW&tmpl=materials
³ This ensures our scheme can embed ciphertexts of messages, which appear as random strings.
Figure 4: Results on BOSS dataset at an embedding rate of 0.1bpp. (a) Cover images of buildings, birds, skies and the ocean. (b) Steganographic images (0.1bpp). (c) Alice, Bob and Eve error for 0.1bpp. (d) Bob decoding success for 0.1bpp.
images do not come from a single distribution. However, we found our scheme is still capable of
embedding secret information successfully.
Figure 4b shows the sample quality of steganographic images with an embedding rate of 0.1bpp,
while Figure 4c and Figure 4d show the error rates of Alice, Bob and Eve, and the success of Bob
decoding the secret message, respectively. While image quality suffers slightly more than on the
celebA dataset, our scheme is still able to learn a steganographic algorithm. Our scheme outputs samples that are not dissimilar from the original dataset, while Bob is able to learn to successfully
decode the message. Alice and Bob both learn their respective tasks in a relatively short period of
time, after which there is not much improvement in terms of hiding or recovering the secret message.
At the end of training, Bob is able to successfully decode the secret message with 99.8% accuracy.
4.3 Comparison with related work
Fooling a steganalyzer, Eve, is easy by design, since we train in such a way that Eve never has a
significant competitive advantage. Thus, we additionally show that the resultant trained steganographic model, Alice, can fool an independent steganalyzer. We compare our scheme against
both state-of-the-art steganographic algorithms and steganalysis tools and show that it performs
competitively.
For both BOSS and CelebA, we compare our scheme against steganographic algorithms HUGO,
WOW and S-UNIWARD. Additionally, we implement the Eve model using supervised training and
compare against the steganalyzer ATS in Table 1. By design, Eve only performs slightly better than
random. One may wonder whether the choice of model for Eve is wise; why not use an established
steganalyzer in place of the Eve model? By training Eve in a supervised fashion, we show that Eve
has the capacity to become a strong steganalyzer, competing against established techniques like ATS,
and so is a good choice for the steganalyzer. Furthermore, Eve does not require a feature extraction
preprocessing step as with ATS, and, from our experiments, is an order of magnitude quicker to
train. For both the BOSS and CelebA datasets, we use 10,000 samples and split in half, creating a training set and a test set. Alice was then trained on the 5000 samples from the training set. We then created an additional 10,000 steganographic images for each steganographic algorithm (Alice,
HUGO, WOW and S-UNIWARD). Now each steganographic algorithm has an associated training
Table 1: Accuracy of distinguishing between cover and steganographic images for the steganalyzers, Eve and ATS, on the BOSS and CelebA datasets at an embedding rate of 0.4bpp.

Steganographic    BOSS                CelebA
Algorithm         ATS     Eve         ATS     Eve
Alice             0.83    0.79        0.95    0.90
HUGO              0.66    0.59        0.94    0.89
WOW               0.75    0.74        0.89    0.85
S-UNIWARD         0.77    0.72        0.91    0.84
set and test set, each consisting of 5000 cover images and 5000 steganographic images. For each
steganographic algorithm we train both ATS and Eve on the associated training set, and then report
accuracy of the steganalyzer on the test set. From Table 1, Eve performs competitively against the
steganalyzer, ATS, and Alice also performs well against other steganographic techniques. While
our scheme does not substantially improve on current popular steganographic methods, it is clear
that it does not perform significantly worse, and that unsupervised training methods are capable of
competing with expert domain knowledge.
4.4 Evaluating robust decryption
Due to the non-convexity of the models in the training scheme, we cannot guarantee that two separate
parties training on the same images will converge to the same model weights, and so learn the same
embedding and decoding algorithms. Thus, prior to steganographic communication, we require
one of the communicating parties to train the scheme locally, encrypt model information and pass
it to the other party along with information about the set of training images. This ensures both
parties learn the same model weights. To validate the practicality of our idea, we trained the scheme
locally (Machine A) and then sent model information to another workstation (Machine B) that
reconstructed the learned models. We then passed steganographic images, embedded by the Alice
model from Machine A, to Machine B, who used the Bob model to recover the secret messages.
Using messages of length corresponding to hiding 0.1bpp, and randomly selecting 10% of the CelebA
dataset, Machine B was able to recover 99.1% of messages sent by Machine A, over 100 trials; our
scheme can successfully decode the secret encrypted message from the steganographic image. Note
that our scheme does not require perfect decoding accuracy to subsequently decrypt the message.
A receiver of a steganographic message can successfully decode and decrypt the secret message if
the mode of encryption can tolerate errors. For example, using a stream cipher such as AES-CTR
guarantees that incorrectly decoded bits will not affect the ability to decrypt the rest of the message.
5 Discussion & conclusion
We have offered substantial evidence that our hypothesis is correct and machine learning can be
used effectively for both steganalysis and steganographic algorithm design. In particular, it is
competitive against designs using human-based rules. By leveraging adversarial training games,
we confirm that neural networks are able to discover steganographic algorithms, and furthermore,
these steganographic algorithms perform well against state-of-the-art techniques. Our scheme does
not require domain knowledge for designing steganographic schemes. We model the attacker as
another neural network and show that this attacker has enough expressivity to perform well against a
state-of-the-art steganalyzer.
We expect this work to lead to fruitful avenues of further research. Finding the balance between
cover image reconstruction loss, Bob?s loss and Eve?s loss to discover an effective embedding scheme
is currently done via grid search, which is a time consuming process. Discovering a more refined
method would greatly improve the efficiency of the training process. Indeed, discovering a method
to quickly check whether the cover image has the capacity to accept a secret message would be a
great improvement over the trial-and-error approach currently implemented. It also became clear
that Alice and Bob learn their tasks after a relatively small number of training steps, further research
is needed to explore if Alice and Bob fail to improve due to limitations in the model or because of
shortcomings in the training scheme.
6 Acknowledgements
The authors would like to acknowledge financial support from the UK Government Communications
Headquarters (GCHQ), as part of University College London's status as a recognised Academic
Centre of Excellence in Cyber Security Research. Jamie Hayes is supported by a Google PhD
Fellowship in Machine Learning. We thank the anonymous reviewers for their comments.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Martín Abadi and David G Andersen. Learning to protect communications with adversarial neural cryptography. arXiv preprint arXiv:1610.06918, 2016.
[3] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. 2016.
[4] Denis Volkhonskiy, Boris Borisenko, and Evgeny Burnaev. Generative adversarial networks for image steganography. ICLR 2016 Open Review, 2016.
[5] Tomáš Filler, Andrew D Ker, and Jessica Fridrich. The square root law of steganographic capacity for Markov covers. In IS&T/SPIE Electronic Imaging, pages 725408–725408. International Society for Optics and Photonics, 2009.
[6] Jessica Fridrich, Miroslav Goljan, and Rui Du. Detecting LSB steganography in color, and gray-scale images. IEEE Multimedia, 8(4):22–28, 2001.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
[8] M. Hayden. The price of privacy: Re-evaluating the NSA, 2014.
[9] Vojtěch Holub and Jessica Fridrich. Designing steganographic distortion using directional filters. In Information Forensics and Security (WIFS), 2012 IEEE International Workshop on, pages 234–239. IEEE, 2012.
[10] Vojtěch Holub, Jessica Fridrich, and Tomáš Denemark. Universal distortion function for steganography in an arbitrary domain. EURASIP Journal on Information Security, 2014(1):1, 2014.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. Springer, 2012.
[13] Daniel Lerch-Hostalot and David Megías. Unsupervised steganalysis based on artificial training sets. Eng. Appl. Artif. Intell., 50(C):45–59, April 2016.
[14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
[15] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[16] Jarno Mielikainen. LSB matching revisited. IEEE Signal Processing Letters, 13(5):285–287, 2006.
[17] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
[18] Tomáš Pevný, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pages 161–177. Springer, 2010.
Near-linear time approximation algorithms for
optimal transport via Sinkhorn iteration
Jason Altschuler
MIT
[email protected]
Jonathan Weed
MIT
[email protected]
Philippe Rigollet
MIT
[email protected]
Abstract
Computing optimal transport distances such as the earth mover's distance is a fundamental problem in machine learning, statistics, and computer vision. Despite the recent introduction of several algorithms with good empirical performance, it is unknown whether general optimal transport distances can be approximated in near-linear time. This paper demonstrates that this ambitious goal is in fact achieved by Cuturi's Sinkhorn Distances. This result relies on a new analysis of Sinkhorn iterations, which also directly suggests a new greedy coordinate descent algorithm GREENKHORN with the same theoretical guarantees. Numerical simulations illustrate that GREENKHORN significantly outperforms the classical SINKHORN algorithm in practice.
Dedicated to the memory of Michael B. Cohen
1 Introduction
Computing distances between probability measures on metric spaces, or more generally between point
clouds, plays an increasingly preponderant role in machine learning [SL11, MJ15, LG15, JSCG16,
ACB17], statistics [FCCR16, PZ16, SR04, BGKL17] and computer vision [RTG00, BvdPPH11,
SdGP+ 15]. A prominent example of such distances is the earth mover?s distance introduced
in [WPR85] (see also [RTG00]), which is a special case of Wasserstein distance, or optimal transport
(OT) distance [Vil09].
While OT distances exhibit a unique ability to capture geometric features of the objects at hand, they
suffer from a heavy computational cost that had been prohibitive in large scale applications until the
recent introduction to the machine learning community of Sinkhorn Distances by Cuturi [Cut13].
Combined with other numerical tricks, these recent advances have enabled the treatment of large
point clouds in computer graphics such as triangle meshes [SdGP+ 15] and high-resolution neuroimaging data [GPC15]. Sinkhorn Distances rely on the idea of entropic penalization, which has
been implemented in similar problems at least since Schrödinger [Sch31, Leo14]. This powerful
idea has been successfully applied to a variety of contexts not only as a statistical tool for model
selection [JRT08, RT11, RT12] and online learning [CBL06], but also as an optimization gadget in
first-order optimization methods such as mirror descent and proximal methods [Bub15].
Related work. Computing an OT distance amounts to solving the following linear program:

min_{P ∈ U_{r,c}} ⟨P, C⟩,   U_{r,c} := {P ∈ ℝ_+^{n×n} : P1 = r, P^⊤1 = c},   (1)

where 1 is the all-ones vector in ℝ^n, C ∈ ℝ_+^{n×n} is a given cost matrix, and r ∈ ℝ^n, c ∈ ℝ^n are given vectors with positive entries that sum to one. Typically C is a matrix containing pairwise
distances (and is thus dense), but in this paper we allow C to be an arbitrary non-negative dense
matrix with bounded entries since our results are more general. For brevity, this paper focuses on
square matrices C and P , since extensions to the rectangular case are straightforward.
This paper is at the intersection of two lines of research: a theoretical one that aims at finding (near)
linear time approximation algorithms for simple problems that are already known to run in polynomial
time and a practical one that pursues fast algorithms for solving optimal transport approximately for
large datasets.
Noticing that (1) is a linear program with O(n) linear constraints and certain graphical structure, one can use the recent Lee-Sidford linear solver to find a solution in time Õ(n^2.5) [LS14], improving over the previous standard of O(n^3.5) [Ren88]. While no practical implementation of the Lee-Sidford
algorithm is known, it provides a theoretical benchmark for our methods. Their result is part of a long
line of work initiated by the seminal paper of Spielman and Teng [ST04] on solving linear systems
of equations, which has provided a building block for near-linear time approximation algorithms
in a variety of combinatorially structured linear problems. A separate line of work has focused on
obtaining faster algorithms for (1) by imposing additional assumptions. For instance, [AS14] obtain
approximations to (1) when the cost matrix C arises from a metric, but their running times are not
truly near-linear. [SA12, ANOY14] develop even faster algorithms for (1), but require C to arise from
a low-dimensional ℓ_p metric.
Practical algorithms for computing OT distances include Orlin?s algorithm for the Uncapacitated
Minimum Cost Flow problem via a standard reduction. Like interior point methods, it has a provable
complexity of O(n^3 log n). This dependence on the dimension is also observed in practice, thereby
preventing large-scale applications. To overcome the limitations of such general solvers, various
ideas ranging from graph sparsification [PW09] to metric embedding [IT03, GD04, SJ08] have been
proposed over the years to deal with particular cases of OT distance.
Our work complements both lines of work, theoretical and practical, by providing the first near-linear
time guarantee to approximate (1) for general non-negative cost matrices. Moreover we show that
this performance is achieved by algorithms that are also very efficient in practice. Central to our
contribution are recent developments of scalable methods for general OT that leverage the idea of
entropic regularization [Cut13, BCC+ 15, GCPB16]. However, the apparent practical efficacy of these
approaches came without theoretical guarantees. In particular, showing that this regularization yields
an algorithm to compute or approximate general OT distances in time nearly linear in the input size
n2 was an open question before this work.
Our contribution. The contribution of this paper is twofold. First we demonstrate that, with an
appropriate choice of parameters, the algorithm for Sinkhorn Distances introduced in [Cut13] is
in fact a near-linear time approximation algorithm for computing OT distances between discrete
measures. This is the first proof that such near-linear time results are achievable for optimal transport.
We also provide previously unavailable guidance for parameter tuning in this algorithm. Core to
our work is a new and arguably more natural analysis of the Sinkhorn iteration algorithm, which we
show converges in a number of iterations independent of the dimension n of the matrix to balance. In
particular, this analysis directly suggests a greedy variant of Sinkhorn iteration that also provably
runs in near-linear time and significantly outperforms the classical algorithm in practice. Finally,
while most approximation algorithms output an approximation of the optimum value of the linear
program (1), we also describe a simple, parallelizable rounding algorithm that provably outputs a
feasible solution to (1). Specifically, for any ? > 0 and bounded, non-negative cost matrix C, we
e 2 /?3 ) and outputs P? ? Ur,c such that
describe an algorithm that runs in time O(n
hP? , Ci ? min hP, Ci + ?
P ?Ur,c
We emphasize that our analysis does not require the cost matrix C to come from an underlying metric;
we only require C to be non-negative. This implies that our results also give, for example, near-linear
time approximation algorithms for Wasserstein p-distances between discrete measures.
Notation. We denote non-negative real numbers by ℝ_+, the set of integers {1, ..., n} by [n], and the n-dimensional simplex by Δ_n := {x ∈ ℝ_+^n : Σ_{i=1}^n x_i = 1}. For two probability distributions p, q ∈ Δ_n such that p is absolutely continuous w.r.t. q, we define the entropy H(p) of p and the Kullback-Leibler divergence K(p‖q) between p and q respectively by

H(p) = Σ_{i=1}^n p_i log(1/p_i),   K(p‖q) := Σ_{i=1}^n p_i log(p_i/q_i).
Similarly, for a matrix P ∈ ℝ_+^{n×n}, we define the entropy H(P) entrywise as Σ_{ij} P_{ij} log(1/P_{ij}). We use 1 and 0 to denote the all-ones and all-zeroes vectors in ℝ^n. For a matrix A = (A_{ij}), we denote by exp(A) the matrix with entries (e^{A_{ij}}). For A ∈ ℝ^{n×n}, we denote its row and column sums by r(A) := A1 ∈ ℝ^n and c(A) := A^⊤1 ∈ ℝ^n, respectively. The coordinates r_i(A) and c_j(A) denote the ith row sum and jth column sum of A, respectively. We write ‖A‖_∞ = max_{ij} |A_{ij}| and ‖A‖_1 = Σ_{ij} |A_{ij}|. For two matrices of the same dimension, we denote the Frobenius inner product of A and B by ⟨A, B⟩ = Σ_{ij} A_{ij} B_{ij}. For a vector x ∈ ℝ^n, we write D(x) ∈ ℝ^{n×n} to denote the diagonal matrix with entries (D(x))_{ii} = x_i. For any two nonnegative sequences (u_n)_n, (v_n)_n, we write u_n = Õ(v_n) if there exist positive constants C, c such that u_n ≤ C v_n (log n)^c. For any two real numbers, we write a ∧ b = min(a, b).
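A direct numpy transcription of the entropy and KL definitions, with the usual convention 0 log 0 = 0 (helper names are our own):

import numpy as np

def entropy(p):
    """H(p) = sum_i p_i log(1/p_i), with 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(1.0 / p[nz])))

def kl_divergence(p, q):
    """K(p||q) = sum_i p_i log(p_i/q_i); assumes p absolutely continuous w.r.t. q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.25, 0.25, 0.5])
print(entropy(p), kl_divergence(p, q))  # both equal log 2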
2 Optimal Transport in near-linear time
In this section, we describe the main algorithm studied in this paper. Pseudocode appears in Algorithm 1.

The core of our algorithm is the computation of an approximate Sinkhorn projection of the matrix A = exp(−ηC) (Step 1), details for which will be given in Section 3. Since our approximate Sinkhorn projection is not guaranteed to lie in the feasible set, we round our approximation to ensure that it lies in U_{r,c} (Step 2). Pseudocode for a simple, parallelizable rounding procedure is given in Algorithm 2.

Algorithm 1 APPROXOT(C, r, c, ε)
   η ← (4 log n)/ε, ε′ ← ε/(8‖C‖_∞)
   \\ Step 1: Approximately project onto U_{r,c}
1: A ← exp(−ηC)
2: B ← PROJ(A, U_{r,c}, ε′)
   \\ Step 2: Round to feasible point in U_{r,c}
3: Output P̂ ← ROUND(B, U_{r,c})

Algorithm 2 ROUND(F, U_{r,c})
1: X ← D(x) with x_i = r_i/r_i(F) ∧ 1
2: F′ ← XF
3: Y ← D(y) with y_j = c_j/c_j(F′) ∧ 1
4: F′′ ← F′Y
5: err_r ← r − r(F′′), err_c ← c − c(F′′)
6: Output G ← F′′ + err_r err_c^⊤/‖err_r‖_1

Algorithm 1 hinges on two subroutines: PROJ and ROUND. We give two algorithms for PROJ: SINKHORN and GREENKHORN. We devote Section 3 to their analysis, which is of independent interest. On the other hand, ROUND is fairly simple. Its analysis is postponed to Section 4.

Our main theorem about Algorithm 1 is the following accuracy and runtime guarantee. The proof is postponed to Section 4, since it relies on the analysis of PROJ and ROUND.

Theorem 1. Algorithm 1 returns a point P̂ ∈ U_{r,c} satisfying

⟨P̂, C⟩ ≤ min_{P ∈ U_{r,c}} ⟨P, C⟩ + ε

in time O(n^2 + S), where S is the running time of the subroutine PROJ(A, U_{r,c}, ε′). In particular, if ‖C‖_∞ ≤ L, then S can be O(n^2 L^3 (log n) ε^{-3}), so that Algorithm 1 runs in O(n^2 L^3 (log n) ε^{-3}) time.

Remark 1. The time complexity in the above theorem reflects only elementary arithmetic operations. In the interest of clarity, we ignore questions of bit complexity that may arise from taking exponentials. The effect of this simplification is marginal since it can be easily shown [KLRS08] that the maximum bit complexity throughout the iterations of our algorithm is O(L(log n)/ε). As a result, factoring in bit complexity leads to a runtime of O(n^2 L^4 (log n)^2 ε^{-4}), which is still truly near-linear.
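For concreteness, here is a direct numpy transcription of Algorithm 2 (our own rendering of the pseudocode above; the helper name is arbitrary):

import numpy as np

def round_to_feasible(F, r, c):
    """Round a non-negative matrix F onto U_{r,c}, following Algorithm 2."""
    x = np.minimum(r / F.sum(axis=1), 1.0)   # step 1: row scaling, capped at 1
    F1 = x[:, None] * F                      # step 2: F' = XF
    y = np.minimum(c / F1.sum(axis=0), 1.0)  # step 3: column scaling, capped at 1
    F2 = F1 * y[None, :]                     # step 4: F'' = F'Y
    err_r = r - F2.sum(axis=1)               # step 5: residual marginals
    err_c = c - F2.sum(axis=0)
    if np.abs(err_r).sum() == 0:             # already feasible
        return F2
    return F2 + np.outer(err_r, err_c) / np.abs(err_r).sum()  # step 6

n = 4
G = round_to_feasible(np.random.rand(n, n), np.ones(n) / n, np.ones(n) / n)
assert np.allclose(G.sum(axis=1), 1 / n) and np.allclose(G.sum(axis=0), 1 / n)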
3 Linear-time approximate Sinkhorn projection
The core of our OT algorithm is the entropic penalty proposed by Cuturi [Cut13]:

P_η := argmin_{P ∈ U_{r,c}} ⟨P, C⟩ − η^{-1} H(P).   (2)
The solution to (2) can be characterized explicitly by analyzing its first-order conditions for optimality.

Lemma 1. [Cut13] For any cost matrix C and r, c ∈ Δ_n, the minimization program (2) has a unique minimum at P_η ∈ U_{r,c} of the form P_η = XAY, where A = exp(−ηC) and X, Y ∈ ℝ_+^{n×n} are both diagonal matrices. The matrices (X, Y) are unique up to a constant factor.
We call the matrix P_η appearing in Lemma 1 the Sinkhorn projection of A, denoted Π_S(A, U_{r,c}), after Sinkhorn, who proved uniqueness in [Sin67]. Computing Π_S(A, U_{r,c}) exactly is impractical, so we implement instead an approximate version PROJ(A, U_{r,c}, ε′), which outputs a matrix B = XAY that may not lie in U_{r,c} but satisfies the condition ‖r(B) − r‖_1 + ‖c(B) − c‖_1 ≤ ε′. We stress that this condition is very natural from a statistical standpoint, since it requires that r(B) and c(B) are close to the target marginals r and c in total variation distance.
3.1 The classical Sinkhorn algorithm
Given a matrix A, Sinkhorn proposed a simple iterative algorithm to approximate the Sinkhorn
projection Π_S(A, U_{r,c}), which is now known as the Sinkhorn-Knopp algorithm or RAS method. Despite the simplicity of this algorithm and its good performance in practice, it has been difficult to analyze. As a result, recent work showing that Π_S(A, U_{r,c}) can be approximated in near-linear time [AZLOW17, CMTV17] has bypassed the Sinkhorn-Knopp algorithm entirely¹. In our work, we
obtain a new analysis of the simple and practical Sinkhorn-Knopp algorithm, showing that it also
approximates ?S (A, Ur,c ) in near-linear time.
Pseudocode for the Sinkhorn-Knopp algorithm appears in Algorithm 3. In brief, it is an alternating projection procedure which renormalizes the rows and columns of A in turn so that they match the desired row and column marginals r and c. At each step, it prescribes to either modify all the rows by multiplying row i by r_i/r_i(A) for i ∈ [n], or to do the analogous operation on the columns. (We interpret the quantity 0/0 as 1 in this algorithm if ever it occurs.) The algorithm terminates when the matrix A^(k) is sufficiently close to the polytope U_{r,c}.

Algorithm 3 SINKHORN(A, U_{r,c}, ε′)
1: Initialize k ← 0
2: A^(0) ← A/‖A‖₁, x⁰ ← 0, y⁰ ← 0
3: while dist(A^(k), U_{r,c}) > ε′ do
4:   k ← k + 1
5:   if k odd then
6:     x_i ← log( r_i / r_i(A^(k−1)) ) for i ∈ [n]
7:     x^k ← x^(k−1) + x,  y^k ← y^(k−1)
8:   else
9:     y_j ← log( c_j / c_j(A^(k−1)) ) for j ∈ [n]
10:    y^k ← y^(k−1) + y,  x^k ← x^(k−1)
11:  A^(k) ← D(exp(x^k)) A D(exp(y^k))
12: Output B ← A^(k)
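A minimal NumPy rendering of Algorithm 3 follows, using the ℓ₁ stopping rule of Theorem 2 (the function name, the iteration cap, and the vectorized layout are ours):

```python
import numpy as np

def sinkhorn(A, r, c, eps_prime, max_iter=10_000):
    """Sketch of Algorithm 3 (Sinkhorn-Knopp) with the l1 stopping criterion."""
    A = A / A.sum()                      # A^(0)
    x = np.zeros(A.shape[0])
    y = np.zeros(A.shape[1])
    for k in range(1, max_iter + 1):
        Ak = np.exp(x)[:, None] * A * np.exp(y)[None, :]   # current iterate
        if np.abs(Ak.sum(axis=1) - r).sum() + np.abs(Ak.sum(axis=0) - c).sum() <= eps_prime:
            break
        if k % 2 == 1:
            x += np.log(r / Ak.sum(axis=1))   # odd step: renormalize the rows
        else:
            y += np.log(c / Ak.sum(axis=0))   # even step: renormalize the columns
    return np.exp(x)[:, None] * A * np.exp(y)[None, :]
```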
3.2  Prior work
Before this work, the best analysis of Algorithm 3 showed that Õ((ε′)⁻²) iterations suffice to obtain a matrix close to U_{r,c} in ℓ₂ distance:
Proposition 1. [KLRS08] Let A be a strictly positive matrix. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖₂ + ‖c(A) − c‖₂ outputs a matrix B satisfying ‖r(B) − r‖₂ + ‖c(B) − c‖₂ ≤ ε′ in O(ρ(ε′)⁻² log(s/ℓ)) iterations, where s = Σ_{ij} A_{ij}, ℓ = min_{ij} A_{ij}, and ρ > 0 is such that r_i, c_i ≤ ρ for all i ∈ [n].
Unfortunately, this analysis is not strong enough to obtain a true near-linear time guarantee. Indeed, the ℓ₂ norm is not an appropriate measure of closeness between probability vectors, since very different distributions on large alphabets can nevertheless have small ℓ₂ distance: for example, (n⁻¹, . . . , n⁻¹, 0, . . . , 0) and (0, . . . , 0, n⁻¹, . . . , n⁻¹) in Δ_{2n} have ℓ₂ distance √(2/n) even though they have disjoint support. As noted above, for statistical problems, including computation of the OT distance, it is more natural to measure distance in ℓ₁ norm.
¹Replacing the PROJ step in Algorithm 1 with the matrix-scaling algorithm developed in [CMTV17] results in a runtime that is a single factor of ε faster than what we present in Theorem 1. The benefit of our approach is that it is extremely easy to implement, whereas the matrix-scaling algorithm of [CMTV17] relies heavily on near-linear time Laplacian solver subroutines, which are not implementable in practice.
The following corollary gives the best ℓ₁ guarantee available from Proposition 1.
Corollary 1. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖₂ + ‖c(A) − c‖₂ outputs a matrix B satisfying ‖r(B) − r‖₁ + ‖c(B) − c‖₁ ≤ ε′ in O(nρ(ε′)⁻² log(s/ℓ)) iterations.
The extra factor of n in the runtime of Corollary 1 is the price to pay to convert an ℓ₂ bound to an ℓ₁ bound. Note that ρ ≥ 1/n, so nρ is always larger than 1. If r = c = 1_n/n are uniform distributions, then nρ = 1 and no dependence on the dimension appears. However, in the extreme where r or c contains an entry of constant size, we get nρ = Ω(n).
3.3  New analysis of the Sinkhorn algorithm
Our new analysis allows us to obtain a dimension-independent bound on the number of iterations beyond the uniform case.
Theorem 2. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖₁ + ‖c(A) − c‖₁ outputs a matrix B satisfying ‖r(B) − r‖₁ + ‖c(B) − c‖₁ ≤ ε′ in O((ε′)⁻² log(s/ℓ)) iterations, where s = Σ_{ij} A_{ij} and ℓ = min_{ij} A_{ij}.
Comparing our result with Corollary 1, we see that our bound is always stronger, by up to a factor of n. Moreover, our analysis is extremely short. Our improved results and simplified proof follow directly from the fact that we carry out the analysis entirely with respect to the Kullback-Leibler divergence, a common measure of statistical distance. This measure possesses a close connection to the total-variation distance via Pinsker's inequality (Lemma 4, below), from which we obtain the desired ℓ₁ bound. Similar ideas can be traced back at least to [GY98], where an analysis of Sinkhorn iterations for bistochastic targets is sketched in the context of a different problem: detecting the existence of a perfect matching in a bipartite graph.
We first define some notation. Given a matrix A and desired row and column sums r and c, we define the potential (Lyapunov) function f : ℝⁿ × ℝⁿ → ℝ by
    f(x, y) = Σ_{ij} A_{ij} e^{x_i + y_j} − ⟨r, x⟩ − ⟨c, y⟩ .
This auxiliary function has appeared in much of the literature on Sinkhorn projections [KLRS08, CMTV17, KK96, KK93]. We call the vectors x and y scaling vectors. It is easy to check that a minimizer (x*, y*) of f yields the Sinkhorn projection of A: writing X = D(exp(x*)) and Y = D(exp(y*)), first-order optimality conditions imply that XAY lies in U_{r,c}, and therefore XAY = Π_S(A, U_{r,c}).
The following lemma exactly characterizes the improvement in the potential function f from an iteration of Sinkhorn, in terms of our current divergence to the target marginals.
Lemma 2. If k ≥ 2, then f(x^(k−1), y^(k−1)) − f(x^k, y^k) = K(r ‖ r(A^(k−1))) + K(c ‖ c(A^(k−1))) .
Proof. Assume without loss of generality that k is odd, so that c(A^(k−1)) = c and r(A^(k)) = r. (If k is even, interchange the roles of r and c.) By definition,
    f(x^(k−1), y^(k−1)) − f(x^k, y^k) = Σ_{ij} (A^(k−1)_{ij} − A^(k)_{ij}) + ⟨r, x^k − x^(k−1)⟩ + ⟨c, y^k − y^(k−1)⟩
                                      = Σ_i r_i (x_i^k − x_i^(k−1)) = K(r ‖ r(A^(k−1))) + K(c ‖ c(A^(k−1))) ,
where we have used that: ‖A^(k−1)‖₁ = ‖A^(k)‖₁ = 1 and y^(k) = y^(k−1); for all i, r_i(x_i^k − x_i^(k−1)) = r_i log( r_i / r_i(A^(k−1)) ); and K(c ‖ c(A^(k−1))) = 0 since c = c(A^(k−1)).
The next lemma has already appeared in the literature and we defer its proof to the supplement.
Lemma 3. If A is a positive matrix with ‖A‖₁ ≤ s and smallest entry ℓ, then
    f(x¹, y¹) − min_{x,y ∈ ℝⁿ} f(x, y) ≤ f(0, 0) − min_{x,y ∈ ℝⁿ} f(x, y) ≤ log(s/ℓ) .
Lemma 4 (Pinsker's inequality). For any probability measures p and q, ‖p − q‖₁ ≤ √(2 K(p‖q)).
Proof of Theorem 2. Let k* be the first iteration such that ‖r(A^(k*)) − r‖₁ + ‖c(A^(k*)) − c‖₁ ≤ ε′. Pinsker's inequality implies that for any k < k*, we have
    (ε′)² < (‖r(A^(k)) − r‖₁ + ‖c(A^(k)) − c‖₁)² ≤ 4(K(r ‖ r(A^(k))) + K(c ‖ c(A^(k)))) ,
so Lemmas 2 and 3 imply that we terminate in k* ≤ 4(ε′)⁻² log(s/ℓ) steps, as claimed.
3.4  Greedy Sinkhorn
In addition to a new analysis of SINKHORN, we propose a new algorithm GREENKHORN, which enjoys the same convergence guarantee but performs better in practice. Instead of performing alternating updates of all rows and columns of A, the GREENKHORN algorithm updates only a single row or column at each step. Thus GREENKHORN updates only O(n) entries of A per iteration, rather than O(n²).
In this respect, GREENKHORN is similar to the stochastic algorithm for Sinkhorn projection proposed by [GCPB16]. There is a natural interpretation of both algorithms as coordinate descent algorithms in the dual space corresponding to row/column violations. Nevertheless, our algorithm differs from theirs in several key ways. Instead of choosing a row or column to update randomly, GREENKHORN chooses the best row or column to update greedily. Additionally, GREENKHORN does an exact line search on the coordinate in question since there is a simple closed form for the optimum, whereas the algorithm proposed by [GCPB16] updates in the direction of the average gradient. Our experiments establish that GREENKHORN performs better in practice; more details appear in the Supplement.
We emphasize that although this algorithm is an extremely natural modification of SINKHORN, previous analyses of SINKHORN cannot be modified to extract any meaningful performance guarantees on GREENKHORN. On the other hand, our new analysis of SINKHORN from Section 3.3 applies to GREENKHORN with only trivial modifications.
Pseudocode for GREENKHORN appears in Algorithm 4. We let dist(A, U_{r,c}) = ‖r(A) − r‖₁ + ‖c(A) − c‖₁ and define the distance function ρ : ℝ₊ × ℝ₊ → [0, +∞] by
    ρ(a, b) = b − a + a log(a/b) .
The choice of ρ is justified by its appearance in Lemma 5, below. While ρ is not a metric, it is easy to see that ρ is nonnegative and satisfies ρ(a, b) = 0 iff a = b.

Algorithm 4 GREENKHORN(A, U_{r,c}, ε′)
1: A ← A/‖A‖₁, x ← 0, y ← 0
2: A^(0) ← A
3: while dist(A, U_{r,c}) > ε′ do
4:   I ← argmax_i ρ(r_i, r_i(A))
5:   J ← argmax_j ρ(c_j, c_j(A))
6:   if ρ(r_I, r_I(A)) > ρ(c_J, c_J(A)) then
7:     x_I ← x_I + log( r_I / r_I(A) )
8:   else
9:     y_J ← y_J + log( c_J / c_J(A) )
10:  A ← D(exp(x)) A^(0) D(exp(y))
11: Output B ← A

We note that after r(A) and c(A) are computed once at the beginning of the algorithm, GREENKHORN can easily be implemented such that each iteration runs in only O(n) time.
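A NumPy sketch of Algorithm 4 follows. For clarity it rescales the full matrix every iteration, whereas the O(n)-per-iteration implementation noted above would update only the touched row or column; the function names and the iteration cap are ours:

```python
import numpy as np

def rho(a, b):
    """rho(a, b) = b - a + a*log(a/b), with the convention 0*log(0) = 0."""
    with np.errstate(divide='ignore', invalid='ignore'):
        term = np.where(a > 0, a * np.log(a / b), 0.0)
    return b - a + term

def greenkhorn(A, r, c, eps_prime, max_iter=1_000_000):
    """Sketch of Algorithm 4: greedily update one row or one column."""
    A = A / A.sum()
    A0 = A.copy()
    x = np.zeros(A.shape[0])
    y = np.zeros(A.shape[1])
    for _ in range(max_iter):
        rA, cA = A.sum(axis=1), A.sum(axis=0)
        if np.abs(rA - r).sum() + np.abs(cA - c).sum() <= eps_prime:
            break
        rho_r, rho_c = rho(r, rA), rho(c, cA)
        I, J = np.argmax(rho_r), np.argmax(rho_c)
        if rho_r[I] > rho_c[J]:
            x[I] += np.log(r[I] / rA[I])   # exact line search on coordinate x_I
        else:
            y[J] += np.log(c[J] / cA[J])   # exact line search on coordinate y_J
        # Full rescaling for readability; an O(n) update per step is possible.
        A = np.exp(x)[:, None] * A0 * np.exp(y)[None, :]
    return A
```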
Theorem 3. The algorithm GREENKHORN outputs a matrix B satisfying ‖r(B) − r‖₁ + ‖c(B) − c‖₁ ≤ ε′ in O(n(ε′)⁻² log(s/ℓ)) iterations, where s = Σ_{ij} A_{ij} and ℓ = min_{ij} A_{ij}. Since each iteration takes O(n) time, such a matrix can be found in O(n²(ε′)⁻² log(s/ℓ)) time.
The analysis requires the following lemma, which is an easy modification of Lemma 2.
Lemma 5. Let A′ and A″ be successive iterates of GREENKHORN, with corresponding scaling vectors (x′, y′) and (x″, y″). If A″ was obtained from A′ by updating row I, then
    f(x′, y′) − f(x″, y″) = ρ(r_I, r_I(A′)) ,
and if it was obtained by updating column J, then
    f(x′, y′) − f(x″, y″) = ρ(c_J, c_J(A′)) .
We also require the following extension of Pinsker's inequality (proof in the Supplement).
Lemma 6. For any α ∈ Δ_n and β ∈ ℝⁿ₊, define ρ(α, β) = Σ_i ρ(α_i, β_i). If ρ(α, β) ≤ 1, then
    ‖α − β‖₁ ≤ √(7 ρ(α, β)) .
Proof of Theorem 3. We follow the proof of Theorem 2. Since the row or column update is chosen greedily, at each step we make progress of at least (1/2n)(ρ(r, r(A)) + ρ(c, c(A))). If ρ(r, r(A)) and ρ(c, c(A)) are both at most 1, then under the assumption that ‖r(A) − r‖₁ + ‖c(A) − c‖₁ > ε′, our progress is at least
    (1/2n)(ρ(r, r(A)) + ρ(c, c(A))) ≥ (1/14n)(‖r(A) − r‖₁² + ‖c(A) − c‖₁²) ≥ (ε′)²/28n .
Likewise, if either ρ(r, r(A)) or ρ(c, c(A)) is larger than 1, our progress is at least 1/2n ≥ (ε′)²/28n. Therefore, we terminate in at most 28n(ε′)⁻² log(s/ℓ) iterations.

4  Proof of Theorem 1
First, we present a simple guarantee about the rounding Algorithm 2. The following lemma shows that the ℓ₁ distance between the input matrix F and the rounded matrix G = ROUND(F, U_{r,c}) is controlled by the total-variation distance between the input matrix's marginals r(F) and c(F) and the desired marginals r and c.
Lemma 7. If r, c ∈ Δ_n and F ∈ ℝ^{n×n}_+, then Algorithm 2 takes O(n²) time to output a matrix G ∈ U_{r,c} satisfying
    ‖G − F‖₁ ≤ 2 [ ‖r(F) − r‖₁ + ‖c(F) − c‖₁ ] .
The proof of Lemma 7 is simple and left to the Supplement. (We also describe in the Supplement a randomized variant of Algorithm 2 that achieves a slightly better bound than Lemma 7.) We are now ready to prove Theorem 1.
Proof of Theorem 1. ERROR ANALYSIS. Let B be the output of PROJ(A, U_{r,c}, ε′), and let P* ∈ argmin_{P ∈ U_{r,c}} ⟨P, C⟩ be an optimal solution to the original OT program.
We first show that ⟨B, C⟩ is not much larger than ⟨P*, C⟩. To that end, write r′ := r(B) and c′ := c(B). Since B = XAY for positive diagonal matrices X and Y, Lemma 1 implies B is the optimal solution to
    min_{P ∈ U_{r′,c′}} ⟨P, C⟩ − η⁻¹ H(P) .        (3)
By Lemma 7, there exists a matrix P′ ∈ U_{r′,c′} such that ‖P′ − P*‖₁ ≤ 2 (‖r′ − r‖₁ + ‖c′ − c‖₁). Moreover, since B is an optimal solution of (3), we have
    ⟨B, C⟩ − η⁻¹ H(B) ≤ ⟨P′, C⟩ − η⁻¹ H(P′) .
Thus, by Hölder's inequality,
    ⟨B, C⟩ − ⟨P*, C⟩ = ⟨B, C⟩ − ⟨P′, C⟩ + ⟨P′, C⟩ − ⟨P*, C⟩
                     ≤ η⁻¹ (H(B) − H(P′)) + 2(‖r′ − r‖₁ + ‖c′ − c‖₁)‖C‖_∞
                     ≤ 2η⁻¹ log n + 2(‖r′ − r‖₁ + ‖c′ − c‖₁)‖C‖_∞ ,        (4)
where we have used the fact that 0 ≤ H(B), H(P′) ≤ 2 log n.
Lemma 7 implies that the output P̂ of ROUND(B, U_{r,c}) satisfies the inequality ‖B − P̂‖₁ ≤ 2 (‖r′ − r‖₁ + ‖c′ − c‖₁). This fact together with (4) and Hölder's inequality yields
    ⟨P̂, C⟩ ≤ min_{P ∈ U_{r,c}} ⟨P, C⟩ + 2η⁻¹ log n + 4(‖r′ − r‖₁ + ‖c′ − c‖₁)‖C‖_∞ .
Applying the guarantee of PROJ(A, U_{r,c}, ε′), we obtain
    ⟨P̂, C⟩ ≤ min_{P ∈ U_{r,c}} ⟨P, C⟩ + 2 log n / η + 4 ε′ ‖C‖_∞ .
Plugging in the values of η and ε′ prescribed in Algorithm 1 finishes the error analysis.
RUNTIME ANALYSIS. Lemma 7 shows that Step 2 of Algorithm 1 takes O(n²) time. The runtime of Step 1 is dominated by the PROJ(A, U_{r,c}, ε′) subroutine. Theorems 2 and 3 imply that both the SINKHORN and GREENKHORN algorithms accomplish this in S = O(n²(ε′)⁻² log(s/ℓ)) time, where s is the sum of the entries of A and ℓ is the smallest entry of A. Since the matrix C is nonnegative, the entries of A are bounded above by 1, thus s ≤ n². The smallest entry of A is e^(−η‖C‖_∞), so log(1/ℓ) = η‖C‖_∞. We obtain S = O(n²(ε′)⁻²(log n + η‖C‖_∞)). The proof is finished by plugging in the values of η and ε′ prescribed in Algorithm 1.
5  Empirical results
Cuturi [Cut13] already gave experimental evidence that using SINKHORN to solve (2) outperforms state-of-the-art techniques for optimal transport. In this section, we provide strong empirical evidence that our proposed GREENKHORN algorithm significantly outperforms SINKHORN.
We consider transportation between pairs of m×m greyscale images, normalized to have unit total mass. The target marginals r and c represent two images in a pair, and C ∈ ℝ^{m²×m²} is the matrix of ℓ₁ distances between pixel locations. Therefore, we aim to compute the earth mover's distance.
Figure 1: Synthetic image.
We run experiments on two datasets: real images, from MNIST, and synthetic images, as in Figure 1.
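As a representative (assumed, not prescribed) rendering of this setup, the ground cost and the two marginals can be built as follows; pixel (i, j) is compared to pixel (k, l) in ℓ₁ distance:

```python
import numpy as np

def l1_cost_matrix(m):
    """C[i*m+j, k*m+l] = |i - k| + |j - l| for an m x m pixel grid."""
    coords = np.array([(i, j) for i in range(m) for j in range(m)])
    return np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=2)

# The marginals are the two images, flattened and normalized to unit mass:
# r = img1.ravel() / img1.sum();  c = img2.ravel() / img2.sum()
# For MNIST, m = 28, so C is 784 x 784.
```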
5.1  MNIST
We first compare the behavior of GREENKHORN and SINKHORN on real images. To that end, we choose 10 random pairs of images from the MNIST dataset, and for each one analyze the performance of APPROXOT when using both GREENKHORN and SINKHORN for the approximate projection step. We add negligible noise 0.01 to each background pixel with intensity 0. Figure 2 paints a clear picture: GREENKHORN significantly outperforms SINKHORN both in the short and long term.
5.2  Random images
To better understand the empirical behavior of both algorithms in a number of different regimes, we devised a synthetic and tunable framework whereby we generate images by choosing a randomly positioned "foreground" square in an otherwise black background. The size of this square is a tunable parameter varied between 20%, 50%, and 80% of the total image's area. Intensities of background pixels are drawn uniformly from [0, 1]; foreground pixels are drawn uniformly from [0, 50]. Such an image is depicted in Figure 1, and results appear in Figure 2.
Figure 2: Comparison of GREENKHORN and SINKHORN on pairs of MNIST images of dimension 28 × 28 (top) and random images of dimension 20 × 20 with 20% foreground (bottom). Left: distance dist(A, U_{r,c}) to the transport polytope (average over 10 random pairs of images). Right: maximum, median, and minimum values of the competitive ratio ln(dist(A_S, U_{r,c})/dist(A_G, U_{r,c})) over 10 runs.
We perform two other experiments with random images in Figure 3. In the first, we vary the number of background pixels and show that GREENKHORN performs better when the number of background pixels is larger. We conjecture that this is related to the fact that GREENKHORN only updates salient rows and columns at each step, whereas SINKHORN wastes time updating rows and columns corresponding to background pixels, which have negligible impact. This demonstrates that GREENKHORN is a better choice especially when data is sparse, which is often the case in practice.
In the second, we consider the role of the regularization parameter η. Our analysis requires taking η of order log n/ε, but Cuturi [Cut13] observed that in practice η can be much smaller. Cuturi showed that SINKHORN outperforms state-of-the-art techniques for computing the OT distance even when η is a small constant, and Figure 3 shows that GREENKHORN runs faster than SINKHORN in this regime with no loss in accuracy.
Figure 3: Left: Comparison of median competitive ratio for random images containing 20%, 50%, and 80% foreground. Right: Performance of GREENKHORN and SINKHORN for small values of η.
Acknowledgments
We thank Michael Cohen, Adrian Vladu, John Kelner, Justin Solomon, and Marco Cuturi for helpful
discussions. We are grateful to Pablo Parrilo for drawing our attention to the fact that GREENKHORN
is a coordinate descent algorithm, and to Alexandr Andoni for references.
JA and JW were generously supported by NSF Graduate Research Fellowship 1122374. PR is
supported in part by grants NSF CAREER DMS-1541099, NSF DMS-1541100, NSF DMS-1712596,
DARPA W911NF-16-1-0551, ONR N00014-17-1-2147 and a grant from the MIT NEC Corporation.
References

[ACB17] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv:1701.07875, January 2017.
[ANOY14] A. Andoni, A. Nikolov, K. Onak, and G. Yaroslavtsev. Parallel algorithms for geometric graph problems. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 574-583, New York, NY, USA, 2014. ACM.
[AS14] P. K. Agarwal and R. Sharathkumar. Approximation algorithms for bipartite matching with metric and geometric costs. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 555-564, New York, NY, USA, 2014. ACM.
[AZLOW17] Z. Allen-Zhu, Y. Li, R. Oliveira, and A. Wigderson. Much faster algorithms for matrix scaling. arXiv preprint arXiv:1704.02315, 2017.
[BCC+15] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111-A1138, 2015.
[BGKL17] J. Bigot, R. Gouet, T. Klein, and A. López. Geodesic PCA in the Wasserstein space by convex PCA. Ann. Inst. H. Poincaré Probab. Statist., 53(1):1-26, 02 2017.
[Bub15] S. Bubeck. Convex optimization: Algorithms and complexity. Found. Trends Mach. Learn., 8(3-4):231-357, 2015.
[BvdPPH11] N. Bonneel, M. van de Panne, S. Paris, and W. Heidrich. Displacement interpolation using Lagrangian mass transport. ACM Trans. Graph., 30(6):158:1-158:12, December 2011.
[CBL06] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
[CMTV17] M. B. Cohen, A. Madry, D. Tsipras, and A. Vladu. Matrix scaling and balancing via box constrained Newton's method and interior point methods. arXiv:1704.02310, 2017.
[Cut13] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2292-2300. Curran Associates, Inc., 2013.
[FCCR16] R. Flamary, M. Cuturi, N. Courty, and A. Rakotomamonjy. Wasserstein discriminant analysis. arXiv:1608.08063, 2016.
[GCPB16] A. Genevay, M. Cuturi, G. Peyré, and F. Bach. Stochastic optimization for large-scale optimal transport. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3440-3448. Curran Associates, Inc., 2016.
[GD04] K. Grauman and T. Darrell. Fast contour matching using approximate earth mover's distance. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, volume 1, pages I-220-I-227, June 2004.
[GPC15] A. Gramfort, G. Peyré, and M. Cuturi. Fast optimal transport averaging of neuroimaging data, pages 261-272. Springer International Publishing, 2015.
[GY98] L. Gurvits and P. Yianilos. The deflation-inflation method for certain semidefinite programming and maximum determinant completion problems. Technical report, NECI, 1998.
[IT03] P. Indyk and N. Thaper. Fast image retrieval via embeddings. In Third International Workshop on Statistical and Computational Theories of Vision, 2003.
[JRT08] A. Juditsky, P. Rigollet, and A. Tsybakov. Learning by mirror averaging. Ann. Statist., 36(5):2183-2206, 2008.
[JSCG16] W. Jitkrittum, Z. Szabó, K. P. Chwialkowski, and A. Gretton. Interpretable distribution features with maximum testing power. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 181-189, 2016.
[KK93] B. Kalantari and L. Khachiyan. On the rate of convergence of deterministic and randomized RAS matrix scaling algorithms. Oper. Res. Lett., 14(5):237-244, 1993.
[KK96] B. Kalantari and L. Khachiyan. On the complexity of nonnegative-matrix scaling. Linear Algebra Appl., 240:87-103, 1996.
[KLRS08] B. Kalantari, I. Lari, F. Ricca, and B. Simeone. On the complexity of general matrix scaling and entropy minimization via the RAS algorithm. Math. Program., 112(2, Ser. A):371-401, 2008.
[Leo14] C. Leonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete and Continuous Dynamical Systems, 34(4):1533-1574, 2014.
[LG15] J. R. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS '15, pages 829-837, Cambridge, MA, USA, 2015. MIT Press.
[LS14] Y. T. Lee and A. Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, FOCS '14, pages 424-433, Washington, DC, USA, 2014. IEEE Computer Society.
[MJ15] J. Mueller and T. Jaakkola. Principal differences analysis: Interpretable characterization of differences between distributions. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS '15, pages 1702-1710, Cambridge, MA, USA, 2015. MIT Press.
[PW09] O. Pele and M. Werman. Fast and robust earth mover's distances. In 2009 IEEE 12th International Conference on Computer Vision, pages 460-467, September 2009.
[PZ16] V. M. Panaretos and Y. Zemel. Amplitude and phase variation of point processes. Ann. Statist., 44(2):771-812, 04 2016.
[Ren88] J. Renegar. A polynomial-time algorithm, based on Newton's method, for linear programming. Mathematical Programming, 40(1):59-93, 1988.
[RT11] P. Rigollet and A. Tsybakov. Exponential screening and optimal rates of sparse estimation. Ann. Statist., 39(2):731-771, 2011.
[RT12] P. Rigollet and A. Tsybakov. Sparse estimation by exponential weighting. Statistical Science, 27(4):558-575, 2012.
[RTG00] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. Int. J. Comput. Vision, 40(2):99-121, November 2000.
[SA12] R. Sharathkumar and P. K. Agarwal. A near-linear time ε-approximation algorithm for geometric bipartite matching. In H. J. Karloff and T. Pitassi, editors, Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19-22, 2012, pages 385-394. ACM, 2012.
[Sch31] E. Schrödinger. Über die Umkehrung der Naturgesetze. Angewandte Chemie, 44(30):636-636, 1931.
[SdGP+15] J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph., 34(4):66:1-66:11, July 2015.
[Sin67] R. Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums. The American Mathematical Monthly, 74(4):402-405, 1967.
[SJ08] S. Shirdhonkar and D. W. Jacobs. Approximate earth mover's distance in linear time. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8, June 2008.
[SL11] R. Sandler and M. Lindenbaum. Nonnegative matrix factorization with earth mover's distance metric for image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1590-1602, August 2011.
[SR04] G. J. Székely and M. L. Rizzo. Testing for equal distributions in high dimension. InterStat (London), 11(5):1-16, 2004.
[ST04] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, STOC '04, pages 81-90, New York, NY, USA, 2004. ACM.
[Vil09] C. Villani. Optimal Transport, volume 338 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2009. Old and new.
[WPR85] M. Werman, S. Peleg, and A. Rosenfeld. A distance metric for multidimensional histograms. Computer Vision, Graphics, and Image Processing, 32(3):328-336, 1985.
Alireza Makhzani, Brendan Frey
University of Toronto
{makhzani,frey}@psi.toronto.edu
Abstract
In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder
in which the generative path is a convolutional autoregressive neural network on
pixels (PixelCNN) that is conditioned on a latent code, and the recognition path
uses a generative adversarial network (GAN) to impose a prior distribution on the
latent code. We show that different priors result in different decompositions of
information between the latent code and the autoregressive decoder. For example,
by imposing a Gaussian distribution as the prior, we can achieve a global vs. local
decomposition, or by imposing a categorical distribution as the prior, we can
disentangle the style and content information of images in an unsupervised fashion.
We further show how the PixelGAN autoencoder with a categorical prior can be
directly used in semi-supervised settings and achieve competitive semi-supervised
classification results on the MNIST, SVHN and NORB datasets.
1
Introduction
In recent years, generative models that can be trained via direct back-propagation have enabled
remarkable progress in modeling natural images. One of the most successful models is the generative
adversarial network (GAN) [1], which employs a two player min-max game. The generative model,
G, samples the prior p(z) and generates the sample G(z). The discriminator, D(x), is trained to
identify whether a point x is a sample from the data distribution or a sample from the generative
model. The generator is trained to maximally confuse the discriminator into believing that generated
samples come from the data distribution. The cost function of GAN is
    min_G max_D  E_{x∼p_data}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))] .
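As a minimal illustration of this min-max game, one alternating update can be sketched in PyTorch as follows (the module and optimizer names are our assumptions, and D is assumed to output probabilities in (0, 1); this is a generic sketch, not the paper's implementation):

```python
import torch

def gan_step(G, D, x_real, opt_g, opt_d, code_dim):
    """One discriminator ascent step and one generator descent step."""
    z = torch.randn(x_real.size(0), code_dim)   # sample z ~ p(z)
    x_fake = G(z)

    # Discriminator: maximize log D(x) + log(1 - D(G(z))).
    d_loss = -(torch.log(D(x_real)).mean() + torch.log(1 - D(x_fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: minimize log(1 - D(G(z))), i.e., confuse the discriminator.
    g_loss = torch.log(1 - D(x_fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```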
GANs can be considered within the wider framework of implicit generative models [2, 3, 4]. Implicit
distributions can be sampled through their generative path, but their likelihood function is not
tractable. Recently, several papers have proposed another application of GAN-style algorithms for
approximate inference [2, 3, 4, 5, 6, 7, 8, 9]. These algorithms use implicit distributions to learn
posterior approximations that are more expressive than the distributions with tractable densities that
are often used in variational inference. For example, adversarial autoencoders [6] use a universal
approximator posterior as the implicit posterior distribution and use adversarial training to match the
aggregated posterior of the latent code to the prior distribution. Adversarial variational Bayes [3, 7]
uses a more general amortized GAN inference framework within a maximum-likelihood learning
setting. Another type of GAN inference technique is used in the ALI [8] and BiGAN [9] models,
which have been shown to approximate maximum likelihood learning [3]. In these models, both
the recognition and generative models are implicit and are jointly learnt by an adversarial training
process.
Variational autoencoders (VAE) [10, 11] are another state-of-the-art image modeling technique that
use neural networks to parametrize the posterior distribution and pair it with a top-down generative
network. Both networks are jointly trained to maximize a variational lower bound on the data log-likelihood. A different framework for learning density models is autoregressive neural networks such
Figure 1: Architecture of the PixelGAN autoencoder.
as NADE [12], MADE [12], PixelRNN [12] and PixelCNN [13]. Unlike variational autoencoders,
which capture the statistics of the data in hierarchical latent codes, the autoregressive models learn
the image densities directly at the pixel level without learning a hierarchical latent representation.
In this paper, we present the PixelGAN autoencoder as a generative autoencoder that combines the
benefits of latent variable models with autoregressive architectures. The PixelGAN autoencoder is a
generative autoencoder in which the generative path is a PixelCNN that is conditioned on a latent
variable. The latent variable is inferred by matching the aggregated posterior distribution to the prior
distribution by an adversarial training technique similar to that of the adversarial autoencoder [6].
However, whereas in adversarial autoencoders the statistics of the data distribution are captured by
the latent code, in the PixelGAN autoencoder they are captured jointly by the latent code and the
autoregressive decoder. We show that imposing different distributions as the prior results in different
factorizations of information between the latent code and the autoregressive decoder. For example, in
Section 2.1, we show that by imposing a Gaussian distribution on the latent code, we can achieve
a global vs. local decomposition of information. In this case, the global latent code no longer has
to model all the irrelevant and fine details of the image, and can use its capacity to capture more
relevant and global statistics of the image. Another type of decomposition of information that can
be learnt by PixelGAN autoencoders is a discrete vs. continuous decomposition. In Section 2.2, we
show that we can achieve this decomposition by imposing a categorical prior on the latent code using
adversarial training. In this case, the categorical latent code captures the discrete underlying factors
of variation in the data, such as class label information, and the autoregressive decoder captures
the remaining continuous structure, such as style information, in an unsupervised fashion. We then
show how PixelGAN autoencoders with categorical priors can be directly used in clustering and
semi-supervised scenarios and achieve very competitive classification results on several datasets in
Section 3. Finally, we present one of the main potential applications of PixelGAN autoencoders in
learning cross-domain relations between two different domains in Section 4.
2  PixelGAN Autoencoders
Let x be a datapoint that comes from the distribution pdata (x) and z be the hidden code. The
recognition path of the PixelGAN autoencoder (Figure 1) defines an implicit posterior distribution
q(z|x) by using a deterministic neural function z = f (x, n) that takes the input x along with random
noise n with a fixed distribution p(n) and outputs z. The aggregated posterior q(z) of this model is
defined as follows:
    q(z) = ∫_x q(z|x) p_data(x) dx .
This parametrization of the implicit posterior distribution was originally proposed in the adversarial
autoencoder work [6] as the universal approximator posterior. We can sample from this implicit
distribution q(z|x), by evaluating f (x, n) at different samples of n, but the density function of this
posterior distribution is intractable. Appendix A.1 discusses the importance of the input noise in
training PixelGAN autoencoders. The generative path p(x|z) is a conditional PixelCNN [13] that
conditions on the latent vector z using an adaptive bias in PixelCNN layers. The inference is done by
an amortized GAN inference technique that was originally proposed in the adversarial autoencoder
work [6]. In this method, an adversarial network is attached on top of the hidden code vector of
the autoencoder and matches the aggregated posterior distribution, q(z), to an arbitrary prior, p(z).
Samples from q(z) and p(z) are provided to the adversarial network as the negative and positive
examples respectively, and the generator of the adversarial network, which is also the encoder of
the autoencoder, tries to match q(z) to p(z) by the gradient that comes through the discriminative
adversarial network.
The adversarial network, the PixelCNN decoder and the encoder are trained jointly in two phases, the reconstruction phase and the adversarial phase, executed on each mini-batch. In the reconstruction
phase, the ground truth input x along with the hidden code z inferred by the encoder are provided to
the PixelCNN decoder. The PixelCNN decoder weights are updated to maximize the log-likelihood of
the input x. The encoder weights are also updated at this stage by the gradient that comes through the
conditioning vector of the PixelCNN. In the adversarial phase, the adversarial network updates both
its discriminative network and its generative network (the encoder) to match q(z) to p(z). Once the
training is done, we can sample from the model by first sampling z from the prior distribution p(z),
and then sampling from the conditional likelihood p(x|z) parametrized by the PixelCNN decoder.
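One training step can be sketched as follows. All module interfaces here are our assumptions rather than the paper's code: `enc(x, n)` stands for the universal approximator posterior z = f(x, n), and `dec.log_likelihood(x, cond=z)` stands for the conditional PixelCNN's per-example log-likelihood:

```python
import torch

def pixelgan_step(enc, dec, disc, x, sample_prior, noise_dim, opt_ae, opt_d, opt_g):
    """One reconstruction phase and one adversarial phase (a sketch)."""
    # Reconstruction phase: update decoder and encoder to maximize log p(x|z).
    z = enc(x, torch.randn(x.size(0), noise_dim))
    rec_loss = -dec.log_likelihood(x, cond=z).mean()
    opt_ae.zero_grad(); rec_loss.backward(); opt_ae.step()

    # Adversarial phase, discriminator: p(z) samples are positive examples,
    # q(z) samples (from the encoder) are negative examples.
    z_fake = enc(x, torch.randn(x.size(0), noise_dim)).detach()
    z_real = sample_prior(x.size(0))
    d_loss = -(torch.log(disc(z_real)).mean() + torch.log(1 - disc(z_fake)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Adversarial phase, generator (= encoder): push q(z) toward p(z).
    z_fake = enc(x, torch.randn(x.size(0), noise_dim))
    g_loss = -torch.log(disc(z_fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```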
We now establish a connection between the PixelGAN autoencoder cost and maximum likelihood
learning using a decomposition of the aggregated evidence lower bound (ELBO) proposed in [14]:
    E_{x∼p_data(x)}[log p(x)] ≥ −E_{x∼p_data(x)}[E_{q(z|x)}[−log p(x|z)]] − E_{x∼p_data(x)}[KL(q(z|x) ‖ p(z))]   (1)
                              = −E_{x∼p_data(x)}[E_{q(z|x)}[−log p(x|z)]] − KL(q(z) ‖ p(z)) − I(z; x)            (2)
The first term in Equation 2 is the reconstruction term and the second term is the marginal KL
divergence between the aggregated posterior and the prior distribution. The third term is the mutual
information between the latent code z and the input x. This is a regularization term that encourages z
and x to be decoupled by removing the information of the data distribution from the hidden code. If
the training set has N examples, I(z; x) is bounded as follows (see [14]).
0 < I(z; x) < log N
(3)
In order to maximize the ELBO, we need to minimize all the three terms of Equation 2. We consider
two cases for the decoder p(x|z):
Deterministic Decoder. If the decoder p(x|z) is deterministic or has very limited stochasticity such
as the simple factorized decoder of the VAE, the mutual information term acts in the complete opposite
direction of the reconstruction term. This is because the only way to minimize the reconstruction
error of x is to learn a hidden code z that is relevant to x, which results in maximizing I(z; x).
Indeed, it can be shown that minimizing the reconstruction term maximizes a variational lower
bound on I(z; x) [15, 16]. For example, in the case of the VAE trained on MNIST, since the
reconstruction is precise, the mutual information term is dominated and is close to its maximum value
I(z; x) ≈ log N ≈ 11.00 nats [14].
Stochastic Decoder. If we use a powerful decoder such as the PixelCNN, the reconstruction term
and the mutual information term will not compete with each other anymore and the network can
minimize both independently. In this case, the optimal solution for maximizing the ELBO would be
to model pdata (x) solely by p(x|z) and thereby minimizing the reconstruction term, and at the same
time, minimizing the mutual information term by ignoring the latent code. As a result, even though
the model achieves a high likelihood, the latent code does not learn any useful representation, which
is undesirable. This problem has been observed in several previous works [17, 18] and different
techniques such as annealing the weight of the KL term [17] or weakening the decoder [18] have
been proposed to make z and x more dependent.
As suggested in [19, 18], we think that the maximum likelihood objective by itself is not a useful
objective for representation learning especially when a powerful decoder is used. In PixelGAN
autoencoders, in order to encourage learning more useful representations, we modify the ELBO
(Equation 2) by removing the mutual information term from it, since this term is explicitly encouraging
z to become independent of x. So our cost function only includes the reconstruction term and the
marginal KL term. The reconstruction term is optimized by the reconstruction phase of training and
the marginal KL term is approximately optimized by the adversarial phase¹. Note that since the
¹The original GAN formulation optimizes the Jensen-Shannon divergence [1], but there are other formulations that optimize the KL divergence, e.g. [3].
Figure 2: (a) Samples of the PixelGAN autoencoder with 2D Gaussian code and limited receptive field of size 9. (b) Samples of the PixelCNN with the same limited receptive field. (c) Samples of the adversarial autoencoder with 2D code.
mutual information term is upper bounded by a constant (log N ), we are still maximizing a lower
bound on the log-likelihood of data. However, this bound is weaker than the ELBO, which is the
price that is paid for learning more useful latent representations by balancing the decomposition of
information between the latent code and the autoregressive decoder.
For implementing the conditioning adaptive bias in the PixelCNN decoder, we explore two different
architectures [13]. In the location-invariant bias, for each PixelCNN layer, we use the latent code
to construct a vector that is broadcasted within each feature map of the layer and then added as an
adaptive bias to that layer. In the location-dependent bias, we use the latent code to construct a spatial
feature map that is broadcasted across different feature maps and then added only to the first layer
of the decoder as an adaptive bias. We will discuss the effect of these architectures on the learnt
representation in Figure 3 of Section 2.1 and their implementation details in Appendix A.2.
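The two conditioning schemes can be sketched as follows; the layer shapes and class names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class LocationInvariantBias(nn.Module):
    """Map the latent code to one scalar bias per feature map, broadcast it
    spatially, and add it inside each PixelCNN layer."""
    def __init__(self, code_dim, n_maps):
        super().__init__()
        self.fc = nn.Linear(code_dim, n_maps)

    def forward(self, h, z):            # h: (B, n_maps, H, W), z: (B, code_dim)
        return h + self.fc(z)[:, :, None, None]

class LocationDependentBias(nn.Module):
    """Map the latent code to a single H x W spatial map, broadcast it across
    feature maps, and add it to the first PixelCNN layer only."""
    def __init__(self, code_dim, height, width):
        super().__init__()
        self.fc = nn.Linear(code_dim, height * width)
        self.hw = (height, width)

    def forward(self, h, z):            # h: (B, n_maps, H, W)
        return h + self.fc(z).view(-1, 1, *self.hw)
```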
2.1  PixelGAN Autoencoders with Gaussian Priors
Here, we show that PixelGAN autoencoders with Gaussian priors can decompose the global and local
statistics of the images between the latent code and the autoregressive decoder. Figure 2a shows the
samples of a PixelGAN autoencoder model with the location-dependent bias trained on the MNIST
dataset. For the purpose of better illustrating the decomposition of information, we have chosen a
2-D Gaussian latent code and limited the receptive field to size 9 for the PixelGAN autoencoder.
Figure 2b shows the samples of a PixelCNN model with the same limited receptive field size of 9 and
Figure 2c shows the samples of an adversarial autoencoder with the 2-D Gaussian latent code. The
PixelCNN can successfully capture the local statistics, but fails to capture the global statistics due
to the limited receptive field size. In contrast, the adversarial autoencoder, whose sample quality is
very similar to that of the VAE, can successfully capture the global statistics, but fails to generate the
details of the images. However, the PixelGAN autoencoder, with the same receptive field and code
size, can combine the best of both and generates sharp images with global statistics.
In PixelGAN autoencoders, both the PixelCNN depth and the conditioning architecture affect the
decomposition of information between the latent code and the autoregressive decoder. We investigate
these effects in Figure 3 by training a PixelGAN autoencoder on MNIST where the code size is
chosen to be 2 for the visualization purpose. As shown in Figure 3a,b, when a shallow decoder is
used, most of the information will be encoded in the hidden code and there is a clean separation
between the digit clusters. As we make the PixelCNN more powerful (Figure 3c,d), we can see that
the hidden code is still used to capture some relevant information of the input, but the separation of
digit clusters is not as sharp when the limited code size of 2 is used. In the next section, we will show
that by using a larger code size (e.g., 30), we can get a much better separation of digit clusters even
when a powerful PixelCNN is used.
The conditioning architecture also affects the decomposition of information. In the case of the
location-invariant bias, the hidden code is encouraged to learn the global information that is locationinvariant (the what information and not the where information) such as the class label information.
For example, we can see in Figure 3a,c that the network has learnt to use one of the axes of the 2D
Gaussian code to explicitly encode the digit label even though a continuous prior is imposed. In this
Figure 3: The effect of the PixelCNN decoder depth and the conditioning architecture on the learnt representation of the PixelGAN autoencoder: (a) shallow PixelCNN, location-invariant bias; (b) shallow PixelCNN, location-dependent bias; (c) deep PixelCNN, location-invariant bias; (d) deep PixelCNN, location-dependent bias. (Shallow = 3 ResBlocks, Deep = 12 ResBlocks.)
case, we can potentially get a much better separation if we impose a discrete prior. This makes this
architecture suitable for the discrete vs. continuous decomposition and we use it for our clustering
and semi-supervised learning experiments. In the case of the location-dependent bias (Figure 3b,d),
the hidden code is encouraged to learn the global information that has location dependent information
such as low-frequency content of the image, similar to what the hidden code of an adversarial or
variational autoencoder would learn (Figure 2c). This makes this architecture suitable for the global
vs. local decomposition experiments such as Figure 2a.
From Figure 3, we can see that the class label information is mostly captured by p(z) while the style
information of the images is captured by both p(z) and p(x|z). This decomposition of information
has also been studied in other works that combine the latent variable models with autoregressive
decoders such as PixelVAE [20] and variational lossy autoencoders (VLAE) [18]. For example, the
VLAE model [18] proposes to use the depth of the PixelCNN decoder to control the decomposition of
information. In their model, the PixelCNN decoder is designed to have a shallow depth (small local
receptive field) so that the latent code z is forced to capture more global information. This approach is
very similar to our example of the PixelGAN autoencoder in Figure 2. However, the question that has
remained unanswered is whether it is possible to achieve a complete decomposition of content and
style in an unsupervised fashion, where the class label or discrete structure information is encoded in
the latent code z, and the remaining continuous structure such as style is captured by a powerful and
deep PixelCNN decoder. This kind of decomposition is particularly interesting as it can be directly
used for clustering and semi-supervised classification. In the next section, we show that we can
learn this decomposition of content and style by imposing a categorical distribution on the latent
representation z using adversarial training. Note that this discrete vs. continuous decomposition is
very different from the global vs. local decomposition, because a continuous factor of variation such
as style can have both global and local effect on the image. Indeed, in order to achieve the discrete
vs. continuous decomposition, we have to use very deep and powerful PixelCNN decoders (up to 20
residual blocks) to capture both the global and local statistics of the style by the PixelCNN while the
discrete content of the image is captured by the categorical latent variable.
2.2  PixelGAN Autoencoders with Categorical Priors
In this section, we present an architecture of the PixelGAN autoencoder that can separate the discrete
information (e.g., class label) from the continuous information (e.g., style information) in the images.
We then show how our architecture can be naturally adopted for the semi-supervised settings.
The architecture that we use is similar to Figure 1, with the difference that we impose a categorical distribution as the prior rather than the Gaussian distribution (Figure 4), and we also use the location-invariant
bias architecture. Another difference is that we use a convolutional network as the inference network
q(z|x) to encourage the encoder to preserve the content and lose the style information of the image.
The inference network has a softmax output and predicts a one-hot vector whose dimension is the
number of discrete labels or categories that we wish the data to be clustered into. The adversarial
network is trained directly on the continuous probability outputs of the softmax layer of the encoder.
Imposing a categorical distribution at the output of the encoder imposes two constraints. The first
constraint is that the encoder has to make confident decisions about the class labels of the inputs. The
Figure 4: Architecture of the PixelGAN autoencoder with the categorical prior. p(z) captures the
class label and p(x|z) is a multi-modal distribution that captures the style distribution of a digit
conditioned on the class label of that digit.
adversarial training pushes the output of the encoder to the corners of the softmax simplex, by which
it ensures that the autoencoder cannot use the latent vector z to carry any continuous style information.
The second constraint imposed by adversarial training is that the aggregated posterior distribution of
z should match the categorical prior distribution with uniform outcome probabilities. This constraint
enforces the encoder to evenly distribute the class labels across the corners of the softmax simplex.
Because of these constraints, the latent variable will only capture the discrete content of the image
and all the continuous style information will be captured by the autoregressive decoder.
In order to better understand and visualize the effect of the adversarial training on shaping the hidden
code distribution, we train a PixelGAN autoencoder on the first three digits of MNIST (18000 training
and 3000 test points) and choose the number of clusters to be 3. Suppose z = [z1 , z2 , z3 ] is the hidden
code which in this case is the output probabilities of the softmax layer of the inference network.
In Figure 5a, we project the 3D softmax simplex of z1 + z2 + z3 = 1 onto a 2D triangle and plot
the hidden codes of the training examples when no distribution is imposed on the hidden code. We
can see from this figure that the network has learnt to use the surface of the softmax simplex to
encode style information of the digits and thus the three corners of the simplex do not have any
meaningful interpretation. Figure 5b corresponds to the code space of the same network when a
categorical distribution is imposed using the adversarial training. In this case, we can see the network
has successfully learnt to encode the label information of the three digits in the three corners of the
simplex, and all the style information has been separately captured by the autoregressive decoder.
This network achieves an almost perfect test error-rate of 0.3% on the first three digits of MNIST,
even though it is trained in a purely unsupervised fashion.
Once the PixelGAN autoencoder is trained, its encoder can be used for clustering new points and its
decoder can be used to generate samples from each cluster. Figure 6 illustrates the samples of the
PixelGAN autoencoder trained on the full MNIST dataset. The number of clusters is set to be 30
and each row corresponds to the conditional samples of one of the clusters (only 16 are shown). We
can see that the discrete latent code of the network has learnt discrete factors of variation such as
Figure 5: Effect of GAN regularization (categorical prior) on the code space of PixelGAN autoencoders: (a) without GAN regularization; (b) with GAN regularization.
Figure 6: Disentangling the content and style in an unsupervised fashion with PixelGAN autoencoders.
Each row shows samples of the model from one of the learnt clusters.
class label information and some discrete style information. For example, digit 1s are put in different clusters based on how tilted they are. The network also assigns different clusters to digit 2s
(based on whether they have a loop) and digit 7s (based on whether they have a dash in the middle).
In Section 3, we will show that by using the encoder of this network, we can obtain about 5% error
rate in classifying digits in an unsupervised fashion, just by matching each cluster to a digit type.
Semi-Supervised PixelGAN Autoencoders. The PixelGAN autoencoder can be used in a semi-supervised setting. In order to incorporate the label information, we add a semi-supervised training
phase. Specifically, we set the number of clusters to be the same as the number of class labels
and after executing the reconstruction and the adversarial phases on an unlabeled mini-batch, the
semi-supervised phase is executed on a labeled mini-batch, by updating the weights of the encoder
q(z|x) to minimize the cross-entropy cost. The semi-supervised cost also reduces the mode-missing
behavior of the GAN training by enforcing the encoder to learn all the modes of the categorical
distribution. In Section 3, we will evaluate the performance of the PixelGAN autoencoders on the
semi-supervised classification tasks.
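The alternating schedule described above is compact enough to sketch directly. The following is a minimal PyTorch sketch of the three phases, in which plain MLPs stand in for the paper's convolutional encoder and PixelCNN decoder and an MSE reconstruction replaces the PixelCNN likelihood; all layer sizes, learning rates and names are illustrative assumptions, not the configuration used in the paper (see Appendix B for that).

```python
# Minimal sketch of the three alternating phases of semi-supervised training.
# Assumptions: MLPs replace the convolutional encoder / PixelCNN decoder, and
# MSE replaces the PixelCNN likelihood; shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K = 784, 10  # input dimension, number of clusters (= number of classes)
enc = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, K))
dec = nn.Sequential(nn.Linear(D + K, 256), nn.ReLU(), nn.Linear(256, D))
disc = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(enc.parameters(), lr=1e-4)

def train_step(x_unlab, x_lab, y_lab):
    n = x_unlab.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) reconstruction phase: decoder is conditioned on the softmax code q(z|x)
    z = F.softmax(enc(x_unlab), dim=1)
    rec_loss = F.mse_loss(dec(torch.cat([x_unlab, z], dim=1)), x_unlab)
    opt_ae.zero_grad(); rec_loss.backward(); opt_ae.step()

    # 2) adversarial phase: match the aggregated posterior q(z) to a uniform
    # categorical prior (one-hot samples at the corners of the simplex)
    z_fake = F.softmax(enc(x_unlab), dim=1)
    z_real = F.one_hot(torch.randint(0, K, (n,)), K).float()
    d_loss = F.binary_cross_entropy_with_logits(disc(z_real), ones) \
           + F.binary_cross_entropy_with_logits(disc(z_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = F.binary_cross_entropy_with_logits(disc(z_fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # 3) semi-supervised phase: cross-entropy on a labeled mini-batch
    ce_loss = F.cross_entropy(enc(x_lab), y_lab)
    opt_g.zero_grad(); ce_loss.backward(); opt_g.step()
```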
3 Experiments
In this paper, we presented the PixelGAN autoencoder as a generative model, but the currently
available metrics for evaluating the likelihood of GAN-based generative models such as Parzen
window estimate are fundamentally flawed [21]. So in this section, we only present the performance of
the PixelGAN autoencoder on downstream tasks such as unsupervised clustering and semi-supervised
classification. The details of all the experiments can be found in Appendix B.
Unsupervised Clustering. We trained a PixelGAN autoencoder in an unsupervised fashion on
the MNIST dataset (Figure 6). We chose the number of clusters to be 30 and used the following
evaluation protocol: once the training is done, for each cluster i, we found the validation example
Figure 7: Conditional samples of the semi-supervised PixelGAN autoencoder. (a) SVHN (1000 labels); (b) MNIST (100 labels); (c) NORB (1000 labels).
[Figure 8 omitted: two panels plotting error rate against training epochs, "Semi-supervised MNIST" (curves for 20, 50 and 100 labels, with the unsupervised 30-cluster error-rate for reference) and "Semi-supervised SVHN" (curves for 500 and 1000 labels); only axis labels and legends were recoverable.]
Figure 8: Semi-supervised error-rate of PixelGAN autoencoders on the MNIST and SVHN datasets.
Method | MNIST (Unsup.) | MNIST (20 labels) | MNIST (50 labels) | MNIST (100 labels) | SVHN (500 labels) | SVHN (1000 labels) | NORB (1000 labels)
VAE [24] | - | - | - | 3.33 (±0.14) | - | 36.02 (±0.10) | 18.79 (±0.05)
VAT [25] | - | - | - | 2.33 | - | 24.63 | 9.88
ADGM [26] | - | - | - | 0.96 (±0.02) | - | 22.86 | 10.06 (±0.05)
SDGM [26] | - | - | - | 1.32 (±0.07) | - | 16.61 (±0.24) | 9.40 (±0.04)
Adversarial Autoencoder [6] | 4.10 (±1.13) | - | - | 1.90 (±0.10) | - | 17.70 (±0.30) | -
Ladder Networks [27] | - | - | - | 0.89 (±0.50) | - | - | -
Convolutional CatGAN [22] | 4.27 | - | - | 1.39 (±0.28) | - | - | -
InfoGAN [16] | 5.00 | - | - | - | - | - | -
Feature Matching GAN [28] | - | 16.77 (±4.52) | 2.21 (±1.36) | 0.93 (±0.06) | 18.44 (±4.80) | 8.11 (±1.30) | -
Temporal Ensembling [23] | - | - | - | - | 7.05 (±0.30) | 5.43 (±0.25) | -
PixelGAN Autoencoders | 5.27 (±1.81) | 12.08 (±5.50) | 1.16 (±0.17) | 1.08 (±0.15) | 10.47 (±1.80) | 6.96 (±0.55) | 8.90 (±1.0)
Table 1: Semi-supervised learning and clustering error-rate on MNIST, SVHN and NORB datasets.
x_n that maximizes q(z_i|x_n), and assigned the label of x_n to all the points in the cluster i. We then
computed the test error based on the assigned class labels to each cluster. As shown in the first
column of Table 1, the performance of PixelGAN autoencoders is on par with other GAN-based
clustering algorithms such as CatGAN [22], InfoGAN [16] and adversarial autoencoders [6].
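This cluster-labeling rule takes only a few lines to implement; below is a NumPy sketch under the assumption that the encoder's softmax outputs have been collected into arrays (the array names are hypothetical).

```python
import numpy as np

def cluster_error_rate(q_val, y_val, q_test, y_test):
    """q_val: (n_val, K) softmax outputs q(z|x) on validation points;
    q_test: (n_test, K) on test points; y_*: integer class labels."""
    # label of the validation example that maximizes q(z_i|x) for each cluster i
    cluster_label = y_val[np.argmax(q_val, axis=0)]        # shape (K,)
    # each test point inherits the label of its most probable cluster
    y_pred = cluster_label[np.argmax(q_test, axis=1)]      # shape (n_test,)
    return np.mean(y_pred != y_test)
```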
Semi-supervised Classification. Table 1 and Figure 8 report the results of semi-supervised classification experiments on the MNIST, SVHN and NORB datasets. On the MNIST dataset with 20,
50 and 100 labels, our classification results are highly competitive. Note that the classification rate
of unsupervised clustering of MNIST is better than semi-supervised MNIST with 20 labels. This
is because in the unsupervised case, the number of clusters is 30, but in the semi-supervised case,
there are only 10 class labels which makes it more likely to confuse two digits. On the SVHN dataset
with 500 and 1000 labels, the PixelGAN autoencoder outperforms all the other methods except the
recently proposed temporal ensembling work [23] which is not a generative model. On the NORB
dataset with 1000 labels, the PixelGAN autoencoder outperforms all the other reported results.
Figure 7 shows the conditional samples of the semi-supervised PixelGAN autoencoder on the MNIST,
SVHN and NORB datasets. Each column of this figure presents sampled images conditioned on a
fixed one-hot latent code. We can see from this figure that the PixelGAN autoencoder can achieve a
rather clean separation of style and content on these datasets with very few labeled data.
4 Learning Cross-Domain Relations with PixelGAN Autoencoders
In this section, we discuss how the PixelGAN autoencoder can be viewed in the context of learning
cross-domain relations between two different domains. We also describe how the problem of
clustering or semi-supervised learning can be cast as the problem of finding a smooth cross-domain
mapping from the data distribution to the categorical distribution.
Recently several GAN-based methods have been developed to learn a cross-domain mapping between
two different domains [29, 30, 31, 6, 32]. In [31], an unsupervised cost function called the output
distribution matching (ODM) is proposed to find a cross-domain mapping F between two domains
D1 and D2 by imposing the following unsupervised constraint on the uncorrelated samples from
x ∼ D1 and y ∼ D2:

Distr[F(x)] = Distr[y]    (4)
where Distr[z] denotes the distribution of the random variable z. The adversarial training is proposed
as one of the methods for matching these distributions. If we have access to a few labeled pairs (x, y),
then F can be further trained on them in a supervised fashion to satisfy F (x) = y. For example,
in speech recognition, we want to find a cross-domain mapping from a sequence of phonemes to a
sequence of characters. By optimizing the ODM cost function in Equation 4, we can find a smooth
function F that takes phonemes at its input and outputs a sequence of characters that respects the
language model. However, the main problem with this method is that the network can learn to ignore
part of the input distribution and still satisfy the ODM cost function by its output distribution. This
problem has also been observed in other works such as [29]. One way to avoid this problem is to add
a reconstruction term to the ODM cost function by introducing a reverse mapping from the output of
the encoder to the input domain. This is essentially the idea of the adversarial autoencoder (AAE) [6]
which learns a generative model by finding a cross-domain mapping between a Gaussian distribution
and the data distribution. Using the ODM cost function along with a reconstruction term to learn
cross-domain relations have been explored in several previous works. For example, InfoGAN [16]
adds a mutual information term to the ODM cost function and optimizes a variational lower bound
on this term. It can be shown that maximizing this variational bound is indeed minimizing the
reconstruction cost of an autoencoder [15]. Similarly, in [32, 33], an AAE is used to learn the
cross-domain relations of the vector representations of words from two different languages. The
architecture of the recent works of DiscoGAN [29] and CycleGAN [30] are also similar to an AAE
in which the latent representation is enforced to have the distribution of the other domain. Here we
describe how our proposed PixelGAN autoencoder can be potentially used in all these application
areas to learn better cross-domain relations. Suppose we want to learn a mapping from domain D1
to D2. In the architecture of Figure 1, we can use independent samples of x ∼ D1 at the input and
instead of imposing a Gaussian distribution on the latent code, we can impose the distribution of
the second domain using its independent samples y ∼ D2. Unlike AAEs, the encoder of PixelGAN
autoencoders does not have to retain all the input information in order to have a lossless reconstruction.
So the encoder can use all its capacity to learn the most relevant mapping from D1 to D2 and at the
same time, the PixelCNN can capture the remaining information that has been lost by the encoder.
We can adopt the ODM idea for semi-supervised learning by assuming D1 is the image domain and
D2 is the label domain. Independent samples of D1 and D2 correspond to samples from the data
distribution pdata(x) and the categorical distribution. The function F = q(y|x) can be parametrized
by a neural network that is trained to satisfy the ODM cost function by matching the aggregated
distribution q(y) = ∫ q(y|x) pdata(x) dx to the categorical distribution using adversarial training. The
few labeled examples are used to further train F to satisfy F (x) = y. However, as explained above,
the problem with this method is that the network can learn to generate the categorical distribution
by ignoring some part of the input distribution. The AAE solves this problem by adding an inverse
mapping from the categorical distribution to the data distribution. However, the main drawback of
the AAE architecture is that due to the reconstruction term, the latent representation now has to
model all the underlying factors of variation in the image. For example, in the semi-supervised AAE
architecture [6], while we are only interested in the one-hot label representation to do semi-supervised
learning, we also need to infer the style of the image so that we can have a lossless reconstruction of
the image. The PixelGAN autoencoder solves this problem by enabling the encoder to only infer the
factor of variation that we are interested in (i.e., label information), while the remaining structure of
the input (i.e., style information) is automatically captured by the autoregressive decoder.
5 Conclusion
In this paper, we proposed the PixelGAN autoencoder, which is a generative autoencoder that
combines a generative PixelCNN with a GAN inference network that can impose arbitrary priors
on the latent code. We showed that imposing different distributions as the prior enables us to learn
a latent representation that captures the type of statistics that we care about, while the remaining
structure of the image is captured by the PixelCNN decoder. Specifically, by imposing a Gaussian
prior, we were able to disentangle the low-frequency and high-frequency statistics of the images,
and by imposing a categorical prior we were able to disentangle the style and content of images and
learn representations that are specifically useful for clustering and semi-supervised learning tasks.
While the main focus of this paper was to demonstrate the application of PixelGAN autoencoders in
downstream tasks such as semi-supervised learning, we discussed how these architectures have many
other potentials such as learning cross-domain relations between two different domains.
Acknowledgments
We would like to thank Nathan Killoran for helpful discussions. We also thank NVIDIA for GPU
donations.
References
[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing
Systems, pages 2672–2680, 2014.
[2] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint
arXiv:1610.03483, 2016.
[3] Ferenc Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
[4] Dustin Tran, Rajesh Ranganath, and David M Blei. Deep and hierarchical implicit models. arXiv preprint
arXiv:1702.08896, 2017.
[5] Rajesh Ranganath, Dustin Tran, Jaan Altosaar, and David Blei. Operator variational inference. In Advances
in Neural Information Processing Systems, pages 496–504, 2016.
[6] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial
autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[7] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying
variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
[8] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and
Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[9] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint
arXiv:1605.09782, 2016.
[10] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. International Conference on
Learning Representations (ICLR), 2014.
[11] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning, 2014.
[12] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv
preprint arXiv:1601.06759, 2016.
[13] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional
image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pages
4790–4798, 2016.
[14] Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational
evidence lower bound. In NIPS 2016 Workshop on Advances in Approximate Bayesian Inference, 2016.
[15] David Barber and Felix V Agakov. The im algorithm: A variational approach to information maximization.
In NIPS, pages 201–208, 2003.
[16] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan:
Interpretable representation learning by information maximizing generative adversarial nets. In Advances
in Neural Information Processing Systems, pages 2172–2180, 2016.
[17] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio.
Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[18] Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever,
and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016.
[19] Ferenc Huszár. Is Maximum Likelihood Useful for Representation Learning? http://www.inference.
vc/maximum-likelihood-for-representation-learning-2.
[20] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and
Aaron Courville. Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013,
2016.
[21] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models.
arXiv preprint arXiv:1511.01844, 2015.
[22] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[23] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint
arXiv:1610.02242, 2016.
[24] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised
learning with deep generative models. In Advances in Neural Information Processing Systems, pages
3581–3589, 2014.
[25] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing
with virtual adversarial training. stat, 1050:25, 2015.
[26] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative
models. arXiv preprint arXiv:1602.05473, 2016.
[27] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised
learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3532–3540,
2015.
[28] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved
techniques for training gans. In Advances in Neural Information Processing Systems, pages 2226–2234,
2016.
[29] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover crossdomain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
[30] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using
cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
[31] Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, and Oriol Vinyals. Towards
principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015.
[32] Antonio Valerio Miceli Barone. Towards cross-lingual distributed representations without parallel text
trained with adversarial autoencoders. arXiv preprint arXiv:1608.02996, 2016.
[33] Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Adversarial training for unsupervised bilingual
lexicon induction.
[34] Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. Denoising criterion for
variational auto-encoding framework. arXiv preprint arXiv:1511.06406, 2015.
[35] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map
inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[36] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and
Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
[37] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey
Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg,
Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens,
Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda
Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.
TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from
tensorflow.org.
[38] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn
with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517,
2017.
[39] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[40] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Consistent Multitask Learning with
Nonlinear Output Relations
Carlo Ciliberto ∗,1    Alessandro Rudi ∗,†,2    Lorenzo Rosasco 3,4,5    Massimiliano Pontil 1,5
{c.ciliberto,m.pontil}@ucl.ac.uk    [email protected]    [email protected]
1 Department of Computer Science, University College London, London, UK.
2 INRIA - Sierra Project-team and École Normale Supérieure, Paris, France.
3 Massachusetts Institute of Technology, Cambridge, USA.
4 Università degli studi di Genova, Genova, Italy.
5 Istituto Italiano di Tecnologia, Genova, Italy.
∗ Equal Contribution
Abstract
Key to multitask learning is exploiting the relationships between different tasks
to improve prediction performance. Most previous methods have focused on the
case where tasks relations can be modeled as linear operators and regularization
approaches can be used successfully. However, in practice assuming the tasks to
be linearly related is often restrictive, and allowing for nonlinear structures is a
challenge. In this paper, we tackle this issue by casting the problem within the
framework of structured prediction. Our main contribution is a novel algorithm for
learning multiple tasks which are related by a system of nonlinear equations that
their joint outputs need to satisfy. We show that our algorithm can be efficiently
implemented and study its generalization properties, proving universal consistency
and learning rates. Our theoretical analysis highlights the benefits of non-linear
multitask learning over learning the tasks independently. Encouraging experimental
results show the benefits of the proposed method in practice.
1 Introduction
Improving the efficiency of learning from human supervision is one of the great challenges in
machine learning. Multitask learning is one of the key approaches in this sense and it is based on
the assumption that different learning problems (i.e. tasks) are often related, a property that can
be exploited to reduce the amount of data needed to learn each individual tasks and in particular
to learn efficiently novel tasks (a.k.a. transfer learning, learning to learn [1]). Special cases of
multitask learning include vector-valued regression and multi-category classification; applications are
numerous, including classic ones in geophysics, recommender systems, co-kriging or collaborative
filtering (see [2, 3, 4] and references therein). Diverse methods have been proposed to tackle this
problem, for examples based on kernel methods [5], sparsity approaches [3] or neural networks [6].
Furthermore, recent theoretical results allowed to quantify the benefits of multitask learning from a
generalization point view when considering specific methods [7, 8].
A common challenge for multitask learning approaches is the problem of incorporating prior assumptions on the task relatedness in the learning process. This can be done implicitly, as in neural
networks [6], or explicitly, as done in regularization methods by designing suitable regularizers
[5]. This latter approach is flexible enough to incorporate different notions of tasks? relatedness
expressed, for example, in terms of clusters or a graph, see e.g. [9, 10]. Further, it can be extended
to learn the tasks? structures when they are unknown [3, 11, 12, 13, 14, 15, 16]. However, most
† Work performed while A.R. was at the Istituto Italiano di Tecnologia.
regularization approaches are currently limited to imposing, or learning, tasks structures expressed by
linear relations (see Sec. 5). For example an adjacency matrix in the case of graphs or a block matrix
in the case of clusters. Clearly while such a restriction might make the problem more amenable to
statistical and computational analysis, in practice it might be a severe limitation.
Encoding and exploiting nonlinear task relatedness is the problem we consider in this paper. Previous
literature on the topic is scarce. Neural networks naturally allow to learn with nonlinear relations,
however it is unclear whether such relations can be imposed a-priori. As explained below, our
problem has some connections to that of manifold valued regression [17]. To our knowledge this
is the first work addressing the problem of explicitly imposing nonlinear output relations among
multiple tasks. Close to our perspective is [18], where however a different approach is proposed,
implicitly enforcing a nonlinear structure on the problem by requiring the parameters of each task
predictors to lie on a shared manifold in the hypotheses space.
Our main contribution is a novel method for learning multiple tasks which are nonlinearly related.
We address this problem from the perspective of structured prediction (see [19, 20] and references
therein) building upon ideas recently proposed in [21]. Specifically we look at multitask learning
as the problem of learning a vector-valued function taking values in a prescribed set, which models
tasks? interactions. We also discuss how to deal with possible violations of such a constraint set.
We study the generalization properties of the proposed approach, proving universal consistency and
learning rates. Our theoretical analysis allows also to identify specific training regimes in which
multitask learning is clearly beneficial in contrast to learning all tasks independently.
2 Problem Formulation
Multitask learning (MTL) studies the problem of estimating multiple (real-valued) functions

f_1, . . . , f_T : X → R    (1)

from corresponding training sets (x_it, y_it)_{i=1}^{n_t}, with x_it ∈ X and y_it ∈ R, for t = 1, . . . , T. The key
idea in MTL is to estimate f_1, . . . , f_T jointly, rather than independently. The intuition is that if the
The intuition is that if the
different tasks are related this strategy can lead to a substantial decrease of sample complexity, that is
the amount of data needed to achieve a given accuracy. The crucial question is then how to encode
and exploit such relations among the tasks.
Previous work on MTL has mostly focused on studying the case where the tasks are linearly related
(see Sec. 5). Indeed, this allows to capture a wide range of relevant situations and the resulting
problem can be often cast as a convex optimization, which can be solved efficiently. However, it
is not hard to imagine situations where different tasks might be nonlinearly related. As a simple
example consider the problem of learning two functions f_1, f_2 : [0, 2π] → R, with f_1(x) = cos(x)
and f_2(x) = sin(x). Clearly the two tasks are strongly related one to the other (they need to satisfy
f_1(x)² + f_2(x)² − 1 = 0 for all x ∈ [0, 2π]) but such structure is nonlinear (here an equation of
degree 2). More realistic examples can be found for instance in the context of modeling physical
systems, such as the case of a robot manipulator. A prototypical learning problem (see e.g. [22]) is to
associate the current state of the system (position, velocity, acceleration) to a variety of measurements
(e.g. torques) that are nonlinearly related one to the other by physical constraints (see e.g. [23]).
Following the intuition above, in this work we model tasks relations as a set of P equations. Specifically we consider a constraint function φ : R^T → R^P and require that φ(f_1(x), . . . , f_T(x)) = 0 for
all x ∈ X. When φ is linear, the problem reverts to linear MTL and can be addressed via standard
approaches (see Sec. 5). On the contrary, the nonlinear case becomes significantly more challenging
and it is not clear how to address it in general. The starting point of our study is to consider the
tasks predictors as a vector-valued function f = (f_1, . . . , f_T) : X → R^T but then observe that φ
imposes constraints on its range. Specifically, in this work we restrict f : X → C to take values in
the constraint set

C = { y ∈ R^T | φ(y) = 0 } ⊆ R^T    (2)
and formulate the nonlinear multitask learning problem as that of finding a good approximation
f̂ : X → C to the solution of the multi-task expected risk minimization problem

minimize_{f : X → C} E(f),    E(f) = (1/T) Σ_{t=1}^T ∫_{X×R} ℓ(f_t(x), y) dρ_t(x, y)    (3)

where ℓ : R × R → R is a prescribed loss function measuring prediction errors for each individual
task and, for every t = 1, . . . , T, ρ_t is the distribution on X × R from which the training points
(x_it, y_it)_{i=1}^{n_t} have been independently sampled.
Nonlinear MTL poses several challenges to standard machine learning approaches. Indeed, when C
is a linear space (e.g. φ is a linear map) the typical strategy to tackle problem (3) is to minimize the
empirical risk (1/T) Σ_{t=1}^T (1/n_t) Σ_{i=1}^{n_t} ℓ(f_t(x_it), y_it) over some suitable space of hypotheses f : X → C
within which optimization can be performed efficiently. However, if C is a nonlinear subset of
R^T, it is not clear how to parametrize a "good" space of functions since most basic properties
typically needed by optimization algorithms are lost (e.g. f_1, f_2 : X → C does not necessarily imply
f_1 + f_2 : X → C). To address this issue, in this paper we adopt the structured prediction perspective
proposed in [21], which we review in the following.
2.1 Background: Structured Prediction and the SELF Framework
The term structured prediction typically refers to supervised learning problems with discrete outputs,
such as strings or graphs [19, 20, 24]. The framework in [21] generalizes this perspective to account
for a more flexible formulation of structured prediction where the goal is to learn an estimator
approximating the minimizer of

minimize_{f : X → C} ∫_{X×Y} L(f(x), y) dρ(x, y)    (4)
given a training set (x_i, y_i)_{i=1}^n of points independently sampled from an unknown distribution ρ on
X × Y, where L : Y × Y → R is a loss function. The output sets Y and C ⊆ Y are not assumed
to be linear spaces but can be either discrete (e.g. strings, graphs, etc.) or dense (e.g. manifolds,
distributions, etc.) sets of "structured" objects. This generalization will be key to tackle the question
of multitask learning with nonlinear output relations in Sec. 3 since it allows to consider the case
where C is a generic subset of Y = R^T. The analysis in [21] hinges on the assumption that the loss L
is "bi-linearizable", namely

Definition 1 (SELF). Let Y be a compact set. A function ℓ : Y × Y → R is a Structure Encoding
Loss Function (SELF) if there exists a continuous feature map ψ : Y → H, with H a reproducing
kernel Hilbert space on Y and a continuous linear operator V : H → H such that for all y, y′ ∈ Y

ℓ(y, y′) = ⟨ψ(y), V ψ(y′)⟩_H.    (5)
In the original work the SELF definition was dubbed "loss trick" as a parallel to the kernel trick [25].
As we discuss in Sec. 4, most MTL loss functions indeed satisfy the SELF property. Under this
assumption, it can be shown that a solution f* : X → C to Eq. (4) must satisfy

f*(x) = argmin_{c∈C} ⟨ψ(c), V g*(x)⟩_H    with    g*(x) = ∫_Y ψ(y) dρ(y|x)    (6)
for all x ∈ X (see [21] or the Appendix). Since g* : X → H is a function with values in a linear
space, we can apply standard regression techniques to learn a ĝ : X → H to approximate g* given
(x_i, ψ(y_i))_{i=1}^n and then obtain the estimator f̂ : X → C as

f̂(x) = argmin_{c∈C} ⟨ψ(c), V ĝ(x)⟩_H    ∀x ∈ X.    (7)
The intuition here is that if ĝ is close to g*, so it will be f̂ to f* (see Sec. 4 for a rigorous analysis of
this relation). If ĝ is the kernel ridge regression estimator obtained by minimizing the empirical risk
(1/n) Σ_{i=1}^n ‖g(x_i) − ψ(y_i)‖²_H (plus regularization), Eq. (7) becomes

f̂(x) = argmin_{c∈C} Σ_{i=1}^n α_i(x) L(c, y_i),    α(x) = (α_1(x), . . . , α_n(x))^⊤ = (K + nλI)^{−1} K_x    (8)
since ĝ can be written as the linear combination ĝ(x) = Σ_{i=1}^n α_i(x) ψ(y_i) and the loss function L is
SELF. In the above formula λ > 0 is a hyperparameter, I ∈ R^{n×n} the identity matrix, K ∈ R^{n×n}
the kernel matrix with elements K_ij = k(x_i, x_j), K_x ∈ R^n the vector with entries (K_x)_i = k(x, x_i)
and k : X × X → R a reproducing kernel on X.
The SELF structured prediction approach is therefore conceptually divided into two distinct phases:
a learning step, where the score functions α_i : X → R are estimated, which consists in solving the
kernel ridge regression in ĝ, followed by a prediction step, where the vector c ∈ C minimizing the
weighted sum in Eq. (8) is identified. Interestingly, while the feature map ψ, the space H and the
operator V allow to derive the SELF estimator, their knowledge is not needed to evaluate f̂(x) in
practice since the optimization at Eq. (8) depends exclusively on the loss L and the score functions α_i.
3 Structured Prediction for Nonlinear MTL
In this section we present the main contribution of this work, namely the extension of the SELF
framework to the MTL setting. Furthermore, we discuss how to cope with possible violations of the
constraint set in practice. We study the theoretical properties of the proposed estimator in Sec. 4. We
begin our analysis by applying the SELF approach to vector-valued regression which will then lead
to the MTL formulation.
3.1 Nonlinear Vector-valued Regression
Vector-valued regression (VVR) is a special instance of MTL where for each input, all output
examples are available during training. In other words, the training sets can be combined into a single
dataset (x_i, y_i)_{i=1}^n, with y_i = (y_i1, . . . , y_iT)^⊤ ∈ R^T. If we denote L : R^T × R^T → R the separable
loss L(y, y′) = (1/T) Σ_{t=1}^T ℓ(y_t, y′_t), nonlinear VVR coincides with the structured prediction problem
in Eq. (4). If L is SELF, we can therefore obtain an estimator according to Eq. (8).
Example 1 (Nonlinear VVR with Square Loss). Let L(y, y′) = Σ_{t=1}^T (y_t − y′_t)². Then, the SELF
estimator for nonlinear VVR can be obtained as f̂ : X → C from Eq. (8) and corresponds to the
projection onto C

f̂(x) = argmin_{c∈C} ‖c − b(x)/a(x)‖² = Π_C(b(x)/a(x))    (9)
with a(x) = Σ_{i=1}^n α_i(x) and b(x) = Σ_{i=1}^n α_i(x) y_i. Interestingly, b(x) = Σ_{i=1}^n α_i(x) y_i =
Y^⊤(K + nλI)^{−1} K_x corresponds to the solution of the standard vector-valued kernel ridge regression
without constraints (we denoted Y ∈ R^{n×T} the matrix with rows y_i^⊤). Therefore, nonlinear VVR
consists in: 1) computing the unconstrained kernel ridge regression estimator b(x), 2) normalizing it
by a(x) and 3) projecting it onto C.
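A minimal sketch of this three-step recipe, under the assumption that C is approximated by a finite point cloud and that the scores alpha(x) of Eq. (8) have already been computed:

```python
import numpy as np

def vvr_predict_square_loss(alpha, Y, C_points):
    """alpha: (n,) scores (K + n*lam*I)^(-1) K_x; Y: (n, T) outputs;
    C_points: (m, T) point cloud approximating the constraint set C."""
    a = alpha.sum()                        # a(x)
    b = alpha @ Y                          # b(x): unconstrained vector-valued KRR
    w = b / a                              # normalization step
    dists = ((C_points - w) ** 2).sum(-1)  # projection = nearest point in C
    return C_points[np.argmin(dists)]
```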
The example above shows that for specific loss functions the estimation of f̂(x) can be significantly
simplified. In general, such optimization will depend on the properties of the constraint set C (e.g.
convex, connected, etc.) and the loss ` (e.g. convex, smooth, etc.). In practice, if C is a discrete
(or discretized) subset of RT , the computation can be performed efficiently via a nearest neighbor
search (e.g. using k-d trees based approaches to speed up computations [26]). If C is a manifold,
recent geometric optimization methods [27] (e.g. SVRG [28]) can be applied to find critical points of
Eq. (8). This setting suggests a connection with manifold regression as discussed below.
Remark 1 (Connection to Manifold Regression). When C is a Riemannian manifold, the problem of
learning f : X ! C shares some similarities to the manifold regression setting studied in [17] (see
also [29] and references therein). Manifold regression can be interpreted as a vector-valued learning
setting where outputs are constrained to be in C ⊆ R^T and prediction errors are measured according
to the geodesic distance. However, note that the two problems are also significantly different since,
1) in MTL noise could make output examples y_i lie close but not exactly on the constraint set C and
moreover, 2) the loss functions used in MTL typically measure errors independently for each task (as
in Eq. (3), see also [5]) and rarely coincide with a geodesic distance.
3.2 Nonlinear Multitask Learning
Differently from nonlinear vector-valued regression, the SELF approach introduced in Sec. 2.1 cannot
be applied to the MTL setting. Indeed, the estimator at Eq. (8) requires knowledge of all tasks outputs
y_i ∈ Y = R^T for every training input x_i ∈ X while in MTL we have a separate dataset (x_it, y_it)_{i=1}^{n_t}
for each task, with y_it ∈ R (this could be interpreted as the vector y_i having "missing entries").
Therefore, in this work we extend the SELF framework to nonlinear MTL. We begin by proving a
characterization of the minimizer f* : X → C of the expected risk E(f) akin to Eq. (6).

Proposition 2. Let ℓ : R × R → R be SELF, with ℓ(y, y′) = ⟨ψ(y), V ψ(y′)⟩_H. Then, the expected
risk E(f) introduced at Eq. (3) admits a measurable minimizer f* : X → C. Moreover, any such
minimizer satisfies, almost everywhere on X,

f*(x) = argmin_{c∈C} Σ_{t=1}^T ⟨ψ(c_t), V g_t*(x)⟩_H,    with    g_t*(x) = ∫_R ψ(y) dρ_t(y|x).    (10)
Prop. 2 extends Eq. (6) by relying on the linearity induced by the SELF assumption combined with
the Aumann's principle [30], which guarantees the existence of a measurable selector f* for the
minimization problem at Eq. (10) (see Appendix). By following the strategy outlined in Sec. 2.1, we
propose to learn T independent functions ĝ_t : X → H, each aiming to approximate the corresponding
g_t* : X → H and then define f̂ : X → C such that

f̂(x) = argmin_{c∈C} Σ_{t=1}^T ⟨ψ(c_t), V ĝ_t(x)⟩_H    ∀x ∈ X.    (11)
We choose the ĝ_t to be the solutions to T independent kernel ridge regression problems

minimize_{g ∈ H⊗G} (1/n_t) Σ_{i=1}^{n_t} ‖g(x_it) − ψ(y_it)‖² + λ_t ‖g‖²_{H⊗G}    (12)

for t = 1, . . . , T, where G is a reproducing kernel Hilbert space on X associated to a kernel
k : X × X → R and the candidate solution g : X → H is an element of H ⊗ G. The following result
shows that in this setting, evaluating the estimator f̂ can be significantly simplified.
Proposition 3 (The Nonlinear MTL Estimator). Let k : X × X → R be a reproducing kernel with
associated reproducing kernel Hilbert space G. Let ĝ_t : X → H be the solution of Eq. (12) for
t = 1, . . . , T. Then the estimator f̂ : X → C defined at Eq. (11) is such that

f̂(x) = argmin_{c∈C} Σ_{t=1}^T Σ_{i=1}^{n_t} α_i^t(x) ℓ(c_t, y_it),    (α_1^t(x), . . . , α_{n_t}^t(x))^⊤ = (K_t + n_t λ_t I)^{−1} K_t^x    (13)

for all x ∈ X and t = 1, . . . , T, where K_t ∈ R^{n_t×n_t} denotes the kernel matrix of the t-th task,
namely (K_t)_ij = k(x_it, x_jt), and K_t^x ∈ R^{n_t} the vector with i-th component equal to k(x, x_it).
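In code, the estimator of Eq. (13) only requires the per-task score vectors and a search over (a discretization of) C; the sketch below assumes C is given as a point cloud and uses illustrative names.

```python
import numpy as np

def mtl_predict(Ks, Kxs, Ys, lams, C_points, loss):
    """Ks[t]: (n_t, n_t) kernel matrix of task t; Kxs[t]: (n_t,) vector K_t^x;
    Ys[t]: (n_t,) outputs of task t; C_points: (m, T) point cloud approximating C."""
    scores = np.zeros(C_points.shape[0])
    for t in range(len(Ks)):
        n_t = Ks[t].shape[0]
        alpha_t = np.linalg.solve(Ks[t] + n_t * lams[t] * np.eye(n_t), Kxs[t])
        # accumulate sum_i alpha_i^t(x) * loss(c_t, y_it) for every candidate c
        scores += np.array([np.sum(alpha_t * loss(c[t], Ys[t])) for c in C_points])
    return C_points[np.argmin(scores)]

# e.g. with the square loss: loss = lambda c_t, y_t: (y_t - c_t) ** 2
```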
Prop. 3 provides an equivalent characterization for the nonlinear MTL estimator at Eq. (11) that is more
amenable to computations (it does not require explicit knowledge of H, ψ or V) and generalizes the
SELF approach (indeed for VVR, Eq. (13) reduces to the SELF estimator at Eq. (8)). Interestingly,
the proposed strategy learns the score functions α_i^t : X → R separately for each task and then
combines them in the joint minimization over C. This can be interpreted as the estimator weighting
predictions according to how "reliable" each task is on the input x ∈ X. We make this intuition more
clear in the following.
Example 2 (Nonlinear MTL with Square Loss). Let ℓ be the square loss. Then, analogously to
Example 1 we have that for any x ∈ X, the multitask estimator at Eq. (13) is

f̂(x) = argmin_{c∈C} Σ_{t=1}^T a_t(x) (c_t − b_t(x)/a_t(x))²    (14)

with a_t(x) = Σ_{i=1}^{n_t} α_i^t(x) and b_t(x) = Σ_{i=1}^{n_t} α_i^t(x) y_it, which corresponds to performing the
projection f̂(x) = Π_C^{A(x)}(w(x)) of the vector w(x) = (b_1(x)/a_1(x), . . . , b_T(x)/a_T(x)) according to
the metric deformation induced by the matrix A(x) = diag(a_1(x), . . . , a_T(x)). This suggests to
interpret a_t(x) as a measure of confidence of task t with respect to x ∈ X. Indeed, tasks with small
a_t(x) will affect less the weighted projection Π_C^{A(x)}.
3.3 Extensions: Violating C
In practice, it is natural to expect the knowledge of the constraints set C to be not exact, for instance due
to noise or modeling inaccuracies. To address this issue, we consider two extensions of nonlinear MTL
that allow candidate predictors to slightly violate the constraints C and introduce a hyperparameter to
control this effect.
Robustness w.r.t. perturbations of C. We soften the effect of the constraint set by requiring
candidate predictors to take value within a radius δ > 0 from C, namely f : X → C_δ with

C_δ = { c + r | c ∈ C, r ∈ R^T, ‖r‖ ≤ δ }.    (15)

The scalar δ > 0 is now a hyperparameter ranging from 0 (C_0 = C) to +∞ (C_∞ = R^T).
Penalizing w.r.t. the distance from C. We can penalize predictions depending on their distance
from the set C by introducing a perturbed version ℓ_t^τ : R^T × R^T → R of the loss

ℓ_t^τ(y, z) = ℓ(y_t, z_t) + ‖z − Π_C(z)‖²/τ    for all y, z ∈ R^T    (16)

where Π_C : R^T → C denotes the orthogonal projection onto C (see Example 1). Below we report the
closed-form solution for nonlinear vector-valued regression with square loss.
Example 3 (VVR and Violations of C). With the same notation as Example 1, let f_0 : X → C denote
the solution at Eq. (9) of nonlinear VVR with exact constraints, and let r = b(x)/a(x) − f_0(x) ∈ R^T.
Then, the solutions to the problem with robust constraints C_δ and perturbed loss function L_τ = (1/T) Σ_t ℓ_t^τ
are respectively (see Appendix for the MTL case)

f̂_δ(x) = f_0(x) + r min(1, δ/‖r‖)    and    f̂_τ(x) = f_0(x) + r τ/(1 + τ).    (17)
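Both closed forms are immediate to compute once f_0(x) and w(x) = b(x)/a(x) are available; a minimal NumPy sketch:

```python
import numpy as np

def relaxed_predictions(f0, w, delta, tau):
    """Closed forms of Eq. (17); f0 is the projection of w = b(x)/a(x) onto C."""
    r = w - f0
    norm_r = max(np.linalg.norm(r), 1e-12)        # guard against r = 0
    f_delta = f0 + r * min(1.0, delta / norm_r)   # robust constraints C_delta
    f_tau = f0 + r * tau / (1.0 + tau)            # perturbed loss L_tau
    return f_delta, f_tau
```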
4 Generalization Properties of Nonlinear MTL
We now study the statistical properties of the proposed nonlinear MTL estimator. Interestingly,
this will allow to identify specific training regimes in which nonlinear MTL achieves learning rates
significantly faster than those available when learning the tasks independently. Our analysis revolves
around the assumption that the loss function used to measure prediction errors is SELF. To this end
we observe that most multitask loss functions are indeed SELF.
Proposition 4. Let ℓ̄ : [a, b] → R be differentiable almost everywhere with derivative Lipschitz
continuous almost everywhere. Let ℓ : [a, b] × [a, b] → R be such that ℓ(y, y′) = ℓ̄(y − y′)
or ℓ(y, y′) = ℓ̄(yy′) for all y, y′ ∈ R. Then: (i) ℓ is SELF and (ii) the separable function
L : Y^T × Y^T → R such that L(y, y′) = (1/T) Σ_{t=1}^T ℓ(y_t, y′_t) for all y, y′ ∈ Y^T is SELF.
Interestingly, most (mono-variate) loss functions used in multitask and supervised learning satisfy
the assumptions of Prop. 4. Typical examples are the square loss (y − y′)², hinge max(0, 1 − yy′)
or logistic log(1 + exp(−yy′)): the corresponding derivative with respect to z = y − y′ or z = yy′
exists and it is Lipschitz almost everywhere on compact sets.
The nonlinear MTL estimator introduced in Sec. 3.2 relies on the intuition that if for each x ∈ X
the kernel ridge regression solutions ĝ_t(x) are close to the conditional expectations g_t*(x), then
also f̂(x) will be close to f*(x). The following result formally characterizes the relation between
the two problems, proving what is often referred to as a comparison inequality in the context of
surrogate frameworks [31]. Throughout the rest of this section we assume ρ_t(x, y) = ρ_t(y|x) ρ_X(x)
for each t = 1, . . . , T and denote ‖g‖_{L²_ρX} the L²_{ρX} = L²(X, H, ρ_X) norm of a function g : X → H
according to the marginal distribution ρ_X.
Theorem 5 (Comparison Inequality). With the same assumptions of Prop. 2, for t = 1, . . . , T let
f* : X → C and g_t* : X → H be defined as in Eq. (10), let ĝ_t : X → H be measurable functions
and let f̂ : X → C satisfy Eq. (11). Let V* be the adjoint of V. Then,

E(f̂) − E(f*) ≤ q_{C,ℓ,T} √( (1/T) Σ_{t=1}^T ‖ĝ_t − g_t*‖²_{L²_ρX} ),    q_{C,ℓ,T} = 2 sup_{c∈C} √( (1/T) Σ_{t=1}^T ‖V* ψ(c_t)‖²_H ).    (18)
The comparison inequality at Eq. (18) is key to study the generalization properties of our nonlinear
MTL estimator by showing that we can control its excess risk in terms of how well the ĝ_t approximate
the true g_t* (see Appendix for a proof of Thm. 5).

Theorem 6. Let C ⊆ [a, b]^T, let X be a compact set and k : X × X → R a continuous universal
reproducing kernel (e.g. Gaussian). Let ℓ : [a, b] × [a, b] → R be a SELF. Let f̂_N : X → C denote
the estimator at Eq. (13) with N = (n_1, . . . , n_T) training points independently sampled from ρ_t for
each task t = 1, . . . , T and λ_t = n_t^{−1/4}. Let n_0 = min_{1≤t≤T} n_t. Then, with probability 1,

lim_{n_0→+∞} E(f̂_N) = inf_{f : X→C} E(f).    (19)
The proof of Thm. 6 relies on the comparison inequality in Thm. 5, which links the excess risk
of the MTL estimator to the square error between ĝ_t and g_t*. Standard results from kernel ridge
regression allow to conclude the proof [32] (see a more detailed discussion in the Appendix). By
imposing further standard assumptions, we can also obtain generalization bounds on ‖ĝ_t − g_t*‖_{L²}
that automatically apply to nonlinear MTL again via the comparison inequality, as shown below.
Theorem 7. With the same assumptions and notation of Thm. 6, let f̂_N : X → C denote the estimator
at Eq. (13) with λ_t = n_t^{−1/2} and assume g_t* ∈ H ⊗ G, for all t = 1, . . . , T. Then for any τ > 0 we
have, with probability at least 1 − 8e^{−τ}, that

E(f̂_N) − inf_{f : X→C} E(f) ≤ q_{C,ℓ,T} h_ℓ τ² n_0^{−1/4} log T,    (20)

where q_{C,ℓ,T} is defined as in Eq. (18) and h_ℓ is a constant independent of C, N, n_t, λ_t, τ, T.
The excess risk bound in Thm. 7 is comparable to that in [21] (Thm. 5). To our knowledge this is
the first result studying the generalization properties of a learning approach to MTL with constraints.
4.1 Benefits of Nonlinear MTL
The rates in Thm. 7 strongly depend on the constraints C via the constant q_{C,ℓ,T}. The following result
studies two special cases that allow to appreciate this effect.

Lemma 8. Let B ≥ 1, B = [−B, B]^T, S ⊆ R^T be the sphere of radius B centered at the origin
and let ℓ be the square loss. Then q_{B,ℓ,T} ≤ 2√5 B² and q_{S,ℓ,T} ≤ 2√5 B² T^{−1/2}.
To explain the effect of C on MTL, define n = Σ_{t=1}^T n_t and assume that n_0 = n_t = n/T. Lemma 8
together with Thm. 7 shows that when the tasks are assumed not to be related (i.e. C = B) the learning
rate of nonlinear MTL is of Õ((T/n)^{1/4}), as if the tasks were learned independently. On the other hand,
when the tasks have a relation (e.g. C = S, implying a quadratic relation between the tasks) nonlinear
MTL achieves a learning rate of Õ((1/(nT))^{1/4}), which improves as the number of tasks increases and as
the total number of observed examples increases. Specifically, for T of the same order of n, we obtain
a rate of Õ(n^{−1/2}) which is comparable to the optimal rates available for kernel ridge regression
with only one task trained on the total number n of examples [32]. This observation corresponds to
the intuition that if we have many related tasks with few training examples each, we can expect to
achieve significantly better generalization by taking advantage of such relations rather than learning
each task independently.
5 Connection to Previous Work: Linear MTL
In this work we formulated the nonlinear MTL problem as that of learning a function f : X → C
taking values in a set of constraints C ⊆ R^T implicitly identified by a set of equations φ(f(x)) = 0. An
alternative approach would be to characterize the set C via an explicit parametrization θ : R^Q → C, for
Q ∈ N, so that the multitask predictor can be decomposed as f = θ ∘ h, with h : X → R^Q. We can
learn h : X → R^Q using empirical risk minimization strategies such as Tikhonov regularization,

minimize_{h=(h_1,...,h_Q) ∈ H^Q} (1/n) Σ_{i=1}^n L(θ(h(x_i)), y_i) + λ Σ_{q=1}^Q ‖h_q‖²_H    (21)
Figure 1: (Bottom) MSE (logarithmic scale) of MTL methods for learning constrained on a circumference (Left)
or a Lemniscate (Right). Results are reported in a boxplot across 10 trials. (Top) Sample predictions of the three
methods trained on 100 points and compared with the ground truth.
since candidate h take value in R^Q and therefore H can be a standard linear space of hypotheses.
However, while Eq. (21) is interesting from the modeling standpoint, it also poses several problems:
1) θ can be nonlinear or even non-continuous, making Eq. (21) hard to solve in practice even for
L convex; 2) θ is not uniquely identified by C and therefore different parametrizations may lead
to very different f̂ = θ ∘ ĥ, which is not always desirable; 3) There are few results on empirical
risk minimization applied to generic loss functions L(θ(·), ·) (via so-called oracle inequalities, see
[30] and references therein), and it is unclear what generalization properties to expect in this setting.
A relevant exception to the issues above is the case where θ is linear. In this setting Eq. (21)
becomes more amenable to both computations and statistical analysis and indeed most previous MTL
literature has been focused on this setting, either by designing ad-hoc output metrics [33], linear
output encodings [34] or regularizers [5]. Specifically, in this latter case the problem is cast as that of
minimizing the functional
minimize_{f=(f_1,...,f_T) ∈ H^T} Σ_{i=1}^n L(f(x_i), y_i) + Σ_{t,s=1}^T A_ts ⟨f_t, f_s⟩_H    (22)

where the psd matrix A = (A_ts)_{t,s=1}^T encourages linear relations between the tasks. It can be
shown that this problem is equivalent to Eq. (21) when θ ∈ R^{T×Q} is linear and A is set to
the pseudoinverse of θθ^⊤. As shown in [14], a variety of situations are recovered considering the
approach above, such as the case where tasks are centered around a common average [9], clustered
in groups [10] or sharing the same subset of features [3, 35]. Interestingly, the above framework can
be further extended to estimate the structure matrix A directly from data, an idea initially proposed in
[12] and further developed in [2, 14, 16].
6 Experiments
Synthetic Dataset. We considered a model of the form $y = f^*(x) + \epsilon$, with $\epsilon \sim \mathcal N(0, \sigma^2 I)$ noise sampled according to a normal distribution and $f^* : \mathcal X \to \mathcal C$, where $\mathcal C \subset \mathbb R^2$ was either a circumference or a lemniscate (see Fig. 1), of equations $\psi_{circ}(y) = y_1^2 + y_2^2 - 1 = 0$ and $\psi_{lemn}(y) = y_1^4 - (y_1^2 - y_2^2) = 0$ for $y \in \mathbb R^2$. We set $\mathcal X = [-\pi, \pi]$, with $f^*_{circ}(x) = (\cos(x), \sin(x))$ and $f^*_{lemn}(x) = (\sin(x), \sin(2x))$ the parametric functions associated respectively to the circumference and the lemniscate. We sampled from 10 to 1000 points for training and 1000 for testing, with noise $\sigma = 0.05$.
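A short sketch of this data-generation process (our code, written from the description above):

```python
# Generate noisy samples on the circumference or lemniscate, sigma = 0.05.
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n, curve="circ", sigma=0.05):
    x = rng.uniform(-np.pi, np.pi, size=n)
    if curve == "circ":
        f = np.stack([np.cos(x), np.sin(x)], axis=1)      # on the unit circle
    else:
        f = np.stack([np.sin(x), np.sin(2 * x)], axis=1)  # on the lemniscate
    y = f + sigma * rng.standard_normal((n, 2))           # add N(0, sigma^2 I) noise
    return x, y

x_train, y_train = sample_task(100, "lemn")
```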
We trained and tested three regression models over 10 trials. We used a Gaussian kernel on the input and chose the corresponding bandwidth and the regularization parameter by hold-out cross-validation on 30% of the training set (see details in the appendix). Fig. 1 (Bottom) reports the mean
Table 1: Explained variance of the robust (NL-MTL[R]) and perturbed (NL-MTL[P]) variants of nonlinear MTL, compared with linear MTL methods on the Sarcos dataset, reported from [16].

Method         | STL       | MTL[36]    | CMTL[10]   | MTRL[11]  | MTFL[13]  | FMTL[16]  | NL-MTL[R] | NL-MTL[P]
Expl. Var. (%) | 40.5 ±7.6 | 34.5 ±10.2 | 33.0 ±13.4 | 41.6 ±7.1 | 49.9 ±6.3 | 50.3 ±5.8 | 55.4 ±6.5 | 54.6 ±5.1
Table 2: Rank prediction error according to the weighted binary loss in [37, 21].

Method    | NL-MTL       | SELF[21]     | Linear [37]  | Hinge [38]   | Logistic [39] | SVMStruct [20] | STL          | MTRL[11]
Rank Loss | 0.271 ±0.004 | 0.396 ±0.003 | 0.430 ±0.004 | 0.432 ±0.008 | 0.432 ±0.012  | 0.451 ±0.008   | 0.581 ±0.003 | 0.613 ±0.005
square square error (MSE) of our nonlinear MTL approach (NL-MTL) compared with the standard least-squares single-task learning (STL) baseline and the multitask relations learning (MTRL) from [11], which encourages tasks to be linearly dependent. However, for both the circumference and the lemniscate, the tasks are strongly nonlinearly related. As a consequence our approach consistently outperforms its two competitors, which assume only linear relations (or none at all). Fig. 1 (Top) provides a qualitative comparison of the three methods (when trained with 100 examples) during a single trial.
Sarcos Dataset. We report experiments on the Sarcos dataset [22]. The goal is to predict the torque measured at each joint of a 7 degrees-of-freedom robotic arm, given the current state, velocities and accelerations measured at each joint (7 tasks/torques for a 21-dimensional input). We used the 10 dataset splits available online for the dataset in [13], each containing 2000 examples per task, with 15 examples used for training/validation while the rest is used to measure errors in terms of the explained variance, namely 1 − nMSE (as a percentage). To compare with results in [13] we used the linear kernel on the input. We refer to the Appendix for details on model selection.
Tab. 1 reports results from [13, 16] for a wide range of previous linear MTL methods [36, 10, 3, 11, 13, 16], together with our NL-MTL approach (both robust and perturbed versions). Since we did not find Sarcos robot model parameters online, we approximated the constraint set C as a point cloud by collecting 1000 random output vectors that did not belong to the training or test sets in [13] (we sampled them from the original dataset [22]). NL-MTL clearly outperforms the "linear" competitors. Note indeed that the torques measured at different joints of a robot are highly nonlinearly related (see for instance [23]), and therefore taking such structure into account can be beneficial to the learning process.
Ranking by Pair-wise Comparison. We consider a ranking problem formulated within the MTL setting: given $D$ documents, we learn $T = D(D-1)/2$ functions $f_{p,q} : \mathcal X \to \{-1, 0, 1\}$, one for each pair of documents $p, q = 1, \dots, D$, that predict whether one document is more relevant than the other for a given input query $x$. The problem can be formulated as multi-label MTL with 0-1 loss: for a given training query $x$ only some labels $y_{p,q} \in \{-1, 0, 1\}$ are available in output (with $+1$ corresponding to document $p$ being more relevant than $q$, $-1$ the opposite, and $0$ that the two are equivalent); a sketch of this label construction follows this paragraph. We therefore have $T$ separate training sets, one for each task (i.e., pair of documents). Clearly, not all possible combinations of outputs $f : \mathcal X \to \{-1, 0, 1\}^T$ are allowed, since predictions need to be consistent (e.g., if $p \succ q$, read "p more relevant than q", and $q \succ r$, then we cannot have $r \succ p$). As shown in [37], these constraints are naturally encoded in a set $\mathrm{DAG}(D)$ in $\mathbb R^T$ of all vectors $G \in \mathbb R^T$ that correspond to (the vectorized, upper triangular part of the adjacency matrix of) a directed acyclic graph with $D$ vertices. The problem can be cast in our nonlinear MTL framework with $f : \mathcal X \to \mathcal C = \mathrm{DAG}(D)$ (see Appendix for details on how to perform the projection onto $\mathcal C$).
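An illustrative sketch (our names, not the paper's code) of turning per-document relevance scores for one query into the $T = D(D-1)/2$ pairwise labels:

```python
# Build pairwise labels y_{p,q} in {-1, 0, 1} from relevance scores.
import numpy as np

def pairwise_labels(relevance):
    """relevance: length-D array of relevance scores for one query."""
    D = len(relevance)
    labels = {}
    for p in range(D):
        for q in range(p + 1, D):
            labels[(p, q)] = int(np.sign(relevance[p] - relevance[q]))
    return labels  # +1: p more relevant, -1: q more relevant, 0: equivalent

print(pairwise_labels(np.array([3.0, 1.0, 3.0])))  # {(0, 1): 1, (0, 2): 0, (1, 2): -1}
```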
We performed experiments on MovieLens100k [40] (movies = documents, users = queries) to compare our NL-MTL estimator with both standard MTL baselines as well as methods designed for ranking problems. We used the (linear) input kernel and the train, validation and test splits adopted in [21] to perform 10 independent trials with 5-fold cross-validation for model selection. Tab. 2 reports the average ranking error and standard deviation of the (weighted) 0-1 loss function considered in [37, 21] for the ranking methods proposed in [38, 39, 37], the SVMStruct estimator [20], the SELF estimator considered in [21] for ranking, and the MTRL and STL baselines, the latter corresponding to individual SVMs trained for each pairwise comparison. Results for previous methods are reported from [21]. NL-MTL outperforms all competitors, achieving better performance than the original SELF estimator. For the sake of brevity we refer to the Appendix for more details on the experiments.
Acknowledgments. This work was supported in part by EPSRC grant EP/P009069/1.
References
[1] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
[2] Mauricio A. Álvarez, Neil Lawrence, and Lorenzo Rosasco. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012.
[3] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. Advances in Neural Information Processing Systems, 19:41, 2007.
[4] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[5] Charles A. Micchelli and Massimiliano Pontil. Kernels for multi-task learning. In Advances in Neural Information Processing Systems, pages 921–928, 2004.
[6] Christopher M. Bishop. Machine learning and pattern recognition. Information Science and Statistics. Springer, Heidelberg, 2006.
[7] Andreas Maurer and Massimiliano Pontil. Excess risk bounds for multitask learning with trace norm regularization. In Conference on Learning Theory (COLT), volume 30, pages 55–76, 2013.
[8] Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research, 17(81):1–32, 2016.
[9] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, pages 615–637, 2005.
[10] Laurent Jacob, Francis Bach, and Jean-Philippe Vert. Clustered multi-task learning: a convex formulation. Advances in Neural Information Processing Systems, 2008.
[11] Yu Zhang and Dit-Yan Yeung. A convex formulation for learning task relationships in multi-task learning. In Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[12] Francesco Dinuzzo, Cheng S. Ong, Peter V. Gehler, and Gianluigi Pillonetto. Learning output kernels with block coordinate descent. International Conference on Machine Learning, 2011.
[13] Pratik Jawanpuria and J. Saketha Nath. A convex feature learning formulation for latent task structure discovery. International Conference on Machine Learning, 2012.
[14] Carlo Ciliberto, Youssef Mroueh, Tomaso A. Poggio, and Lorenzo Rosasco. Convex learning of multiple tasks and their structure. In International Conference on Machine Learning (ICML), 2015.
[15] Carlo Ciliberto, Lorenzo Rosasco, and Silvia Villa. Learning multiple visual tasks while discovering their structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 131–139, 2015.
[16] Pratik Jawanpuria, Maksim Lapin, Matthias Hein, and Bernt Schiele. Efficient output kernel learning for multiple tasks. In Advances in Neural Information Processing Systems, pages 1189–1197, 2015.
[17] Florian Steinke and Matthias Hein. Non-parametric regression between manifolds. In Advances in Neural Information Processing Systems, pages 1561–1568, 2009.
[18] Arvind Agarwal, Samuel Gerber, and Hal Daume. Learning multiple tasks using manifold regularization. In Advances in Neural Information Processing Systems, pages 46–54, 2010.
[19] Gökhan Bakir, Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, Ben Taskar, and S.V.N. Vishwanathan. Predicting structured data. MIT Press, 2007.
[20] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 2005.
[21] Carlo Ciliberto, Lorenzo Rosasco, and Alessandro Rudi. A consistent regularization approach for structured prediction. Advances in Neural Information Processing Systems 29 (NIPS), pages 4412–4420, 2016.
[22] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning. The MIT Press, 2006.
[23] Lorenzo Sciavicco and Bruno Siciliano. Modeling and control of robot manipulators, volume 8. McGraw-Hill New York, 1996.
[24] Sebastian Nowozin, Christoph H. Lampert, et al. Structured learning and prediction in computer vision. Foundations and Trends in Computer Graphics and Vision, 2011.
[25] Bernhard Schölkopf and Alexander J. Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, 2002.
[26] Thomas H. Cormen. Introduction to algorithms. MIT Press, 2009.
[27] Suvrit Sra and Reshad Hosseini. Geometric optimization in machine learning. In Algorithmic Advances in Riemannian Geometry and Applications, pages 73–91. Springer, 2016.
[28] Hongyi Zhang, Sashank J. Reddi, and Suvrit Sra. Riemannian SVRG: Fast stochastic optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems 29, 2016.
[29] Florian Steinke, Matthias Hein, and Bernhard Schölkopf. Nonparametric regression between general Riemannian manifolds. SIAM Journal on Imaging Sciences, 3(3):527–563, 2010.
[30] Ingo Steinwart and Andreas Christmann. Support Vector Machines. Information Science and Statistics. Springer New York, 2008.
[31] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[32] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[33] Vikas Sindhwani, Aurelie C. Lozano, and Ha Quang Minh. Scalable matrix-valued kernel learning and high-dimensional nonlinear causal inference. CoRR, abs/1210.4792, 2012.
[34] Rob Fergus, Hector Bernal, Yair Weiss, and Antonio Torralba. Semantic label sharing for learning with many categories. European Conference on Computer Vision, 2010.
[35] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231–252, 2010.
[36] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2004.
[37] John C. Duchi, Lester W. Mackey, and Michael I. Jordan. On the consistency of ranking algorithms. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 327–334, 2010.
[38] Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. Advances in Neural Information Processing Systems, pages 115–132, 1999.
[39] Ofer Dekel, Yoram Singer, and Christopher D. Manning. Log-linear models for label ranking. In Advances in Neural Information Processing Systems, 2004.
[40] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2015.
6,406 | 6,795 | Alternating minimization for dictionary learning with random initialization
Niladri S. Chatterji
UC Berkeley
[email protected]
Peter L. Bartlett
UC Berkeley
[email protected]
Abstract
We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples $y^1, y^2, \dots, y^n$ into an appropriate basis (dictionary) $A^*$ and sparse vectors $x^{1*}, \dots, x^{n*}$. Our algorithm is a simple alternating minimization procedure that switches between $\ell_1$ minimization and gradient descent in alternate steps. Dictionary learning, and specifically alternating minimization algorithms for dictionary learning, are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace the condition on the operator norm (that is, the largest magnitude singular value) of the true underlying dictionary $A^*$ with a condition on the matrix infinity norm (that is, the largest magnitude term). This not only allows us to get convergence rates for the error of the estimated dictionary measured in the matrix infinity norm, but also ensures that a random initialization will provably converge to the global optimum. Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
1  Introduction

In the problem of sparse coding/dictionary learning, given i.i.d. samples $y^1, y^2, \dots, y^n \in \mathbb R^d$ produced from the generative model
$$y^i = A^* x^{i*} \qquad (1)$$
for $i \in \{1, 2, \dots, n\}$, the goal is to recover a fixed dictionary $A^* \in \mathbb R^{d \times r}$ and $s$-sparse vectors $x^{i*} \in \mathbb R^r$. (An $s$-sparse vector has no more than $s$ non-zero entries.) In many problems of interest, the dictionary is often overcomplete, that is, $r \geq d$. This is believed to add flexibility in modeling and robustness. This model was first proposed in neuroscience as an energy minimization heuristic that reproduces features of the V1 portion of the visual cortex [28; 22]. It has also been an extremely successful approach to identifying low-dimensional structure in high-dimensional data; it is used extensively to find features in images, speech and video (see, for example, references in [13]).
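A short sketch of sampling from the generative model (1); the constants and distribution choices here are illustrative stand-ins, not the paper's exact assumptions:

```python
# Generate samples y^i = A* x^{i*} with unit-norm dictionary columns
# and s-sparse codes (cf. assumptions A3, C3-C5 below).
import numpy as np

rng = np.random.default_rng(0)
d, r, s, n = 64, 128, 5, 2000

A_star = rng.standard_normal((d, r))
A_star /= np.linalg.norm(A_star, axis=0)              # normalized columns

X_star = np.zeros((r, n))
for i in range(n):
    support = rng.choice(r, size=s, replace=False)        # uniform support
    X_star[support, i] = rng.choice([-1.0, 1.0], size=s)  # bounded, centered entries

Y = A_star @ X_star                                   # samples y^i
```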
Most formulations of dictionary learning tend to yield non-convex optimization problems. For example, note that if either $x^{i*}$ or $A^*$ were known, given $y^i$, this would just be a (matrix/sparse) regression problem. However, estimating both $x^{i*}$ and $A^*$ simultaneously leads to both computational as well as statistical complications. The heuristic of alternating minimization works well empirically for dictionary learning. At each step, first an estimate of the dictionary is held fixed while the sparse coefficients are estimated; next, using these sparse coefficients, the dictionary is updated. Note that in each step the sub-problem has a convex formulation, and there is a range of efficient algorithms that can be used. This heuristic has been very successful empirically, and there has also been significant recent theoretical progress in understanding its performance, which we discuss next.
1.1  Related Work
A recent line of work theoretically analyzes local linear convergence rates for alternating minimization procedures applied to dictionary learning [1; 4]. Arora et al. [4] present a neurally plausible algorithm that recovers the dictionary exactly for sparsity up to $s = O(\sqrt d/(\mu \log d))$, where $\mu$ is the level of incoherence in the dictionary (which is a measure of the similarity of the columns; see Assumption A1 below). Agarwal et al. [1] analyze a least squares/$\ell_1$ minimization scheme and show that it can tolerate sparsity up to $s = O(d^{1/6})$. Both of these establish local linear convergence guarantees for the maximum column-wise distance. Exact recovery guarantees require a singular-value decomposition (SVD) or clustering based procedure to initialize their dictionary estimates (see also the previous work [5; 2]).
For the undercomplete case (when $r \leq d$), Sun et al. [33] provide a Riemannian trust region method that can tolerate sparsity $s = O(d)$, while earlier work by Spielman et al. [32] provides an algorithm that works in this setting for sparsity $O(\sqrt d)$.
Local and global optima of non-convex formulations for the problem have also been extensively studied in [36; 17; 18], among others. Apart from alternating minimization, other approaches (without theoretical convergence guarantees) for dictionary learning include K-SVD [3] and MOD [14]. There is also a nice formulation by Barak et al. [7], based on the sum-of-squares hierarchy. Recently, Hazan and Ma [20] provide guarantees for improper dictionary learning, where instead of learning a dictionary, they learn a comparable encoding via convex relaxations. Our work also adds to the recent literature on analyzing alternating minimization algorithms [21; 26; 27; 19; 6].

1.2  Contributions
Our main contribution is to present new conditions under which alternating minimization for dictionary learning converges at a linear rate to the global optimum. We impose a condition on the matrix infinity norm (largest magnitude entry) of the underlying dictionary. This allows dictionaries with operator norm growing with dimension $(d, r)$. The error rates are measured in the matrix infinity norm, which is sharper than the previous error rates in maximum column-wise error.
We also identify conditions under which a trivial random initialization of the dictionary works, as opposed to the more complex SVD and clustering procedures required in previous work. This is possible as our radius of convergence, again measured in the matrix infinity norm, is larger than that of previous results, which required the initial estimate to be close column-wise. Our results hold for a rather arbitrary level of overcompleteness, $r = O(\mathrm{poly}(d))$. We establish convergence results for sparsity level $s = O(\sqrt d)$, which is information theoretically optimal for incoherent dictionaries and improves the previously best known results in the overcomplete setting by a logarithmic factor. Our algorithm is simple, involving an $\ell_1$-minimization step followed by a gradient update for the dictionary.
A key step in our proofs is an analysis of a robust sparse estimator, the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector, under fixed (worst case) corruption in the dictionary. We prove that this estimator is minimax optimal in this setting, which might be of independent interest.

1.3  Organization

In Section 2, we present our alternating minimization algorithm and discuss the sparse regression estimator. In Section 3, we list the assumptions under which our algorithm converges and state the main convergence result. Finally, in Section 4, we prove convergence of our algorithm. We defer technical lemmas, analysis of the sparse regression estimator, and minimax analysis to the appendix.
Notation

For a vector $v \in \mathbb R^d$, $v_i$ denotes the $i$th component of the vector, $\|v\|_p$ denotes the $\ell_p$ norm, $\mathrm{supp}(v)$ denotes the support of $v$ (that is, the set of non-zero entries of the vector), and $\mathrm{sgn}(v)$ denotes the sign of the vector $v$, that is, a vector with $\mathrm{sgn}(v)_i = 1$ if $v_i > 0$, $\mathrm{sgn}(v)_i = -1$ if $v_i < 0$, and $\mathrm{sgn}(v)_i = 0$ if $v_i = 0$. For a matrix $W$, $W_i$ denotes the $i$th column, $W_{ij}$ is the element in the $i$th row and $j$th column, $\|W\|_{op}$ denotes the operator norm, and $\|W\|_\infty$ denotes the maximum of the magnitudes of the elements of $W$. For a set $J$, we denote its cardinality by $|J|$.
Algorithm 1: Alternating Minimization for Dictionary Learning

Input: step size $\eta$, samples $\{y^k\}_{k=1}^n$, initial estimate $A^{(0)}$, number of steps $T$, thresholds $\{\tau^{(t)}\}_{t=1}^T$, initial radius $R^{(0)}$, and parameters $\{\lambda^{(t)}\}_{t=1}^T$, $\{\gamma^{(t)}\}_{t=1}^T$ and $\{\rho^{(t)}\}_{t=1}^T$.

for $t = 1, 2, \dots, T$ do
    for $k = 1, 2, \dots, n$ do
        $w^{k,(t)} = MUS_{\lambda^{(t)}, \gamma^{(t)}, \rho^{(t)}}\big(y^k, A^{(t-1)}, R^{(t-1)}\big)$
        for $l = 1, 2, \dots, r$ do
            $x_l^{k,(t)} = w_l^{k,(t)}\, \mathbb I\big(|w_l^{k,(t)}| > \tau^{(t)}\big)$   ($x^{k,(t)}$ is the sparse estimate)
        end
    end
    for $i = 1, 2, \dots, d$ do
        for $j = 1, 2, \dots, r$ do
            $A_{ij}^{(t)} = A_{ij}^{(t-1)} - \dfrac{\eta}{n}\displaystyle\sum_{k=1}^n\Big[\sum_{p=1}^r A_{ip}^{(t-1)}\, x_p^{k,(t)}\, x_j^{k,(t)} - y_i^k\, x_j^{k,(t)}\Big]$
        end
    end
    $R^{(t)} = \frac{3}{4} R^{(t-1)}$
end
Throughout the paper, we use $C$ multiple times to denote global constants that are independent of the problem parameters and dimension. We denote the indicator function by $\mathbb I(\cdot)$.
2  Algorithm

Given an initial estimate of the dictionary $A^{(0)}$, we alternate between an $\ell_1$ minimization procedure (specifically the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector, $MUS_{\lambda,\gamma,\rho}$ in the algorithm, followed by a thresholding step) and a gradient step, under sample $\ell_2$ loss, to update the dictionary. We analyze this algorithm and demand linear convergence at a rate of $3/4$; convergence analysis for other rates follows in the same vein with altered constants. In subsequent sections, we also establish conditions under which the initial estimate for the dictionary $A^{(0)}$ can be chosen randomly. Below we state the permitted range for the various parameters in the algorithm above.
1. Step size: We need to set the step size in the range $3r/4s < \eta < r/s$.

2. Threshold: At each step set the threshold at $\tau^{(t)} = 16 R^{(t-1)} M\big(R^{(t-1)}(s+1) + s/\sqrt d\big)$.

3. Tuning parameters: We need to pick $\lambda^{(t)}$ and $\gamma^{(t)}$ such that assumption (D5) is satisfied. A suitable choice satisfying this assumption is
$$128\, s\, \big(R^{(t-1)}\big)^2 \le \lambda^{(t)} \le 3, \qquad 32\bigg(s^{3/2}\big(R^{(t-1)}\big)^2 + \Big(4 + \frac{6}{\sqrt s}\Big)\frac{s^{3/2} R^{(t-1)}}{d^{1/2}}\bigg) \le \gamma^{(t)} \le 3.$$
We need to set $\rho^{(t)}$ as specified by Theorem 16,
$$\rho^{(t)} = \sqrt s\,\big(R^{(t-1)}\big)^2 + \sqrt{\frac{s}{d}}\, R^{(t-1)}.$$

2.1  Sparse Regression Estimator
Our proof of convergence for Algorithm 1 also goes through with different choices of robust sparse regression estimators; however, we can establish the tightest guarantees when the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector is used in the sparse regression step. The $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector [8] was established as a modification of the Dantzig selector to handle uncertainty in the dictionary. There is a beautiful line of work that precedes this that includes [30; 31; 9]. There are also modified non-convex LASSO programs that have been studied [23] and Orthogonal Matching Pursuit algorithms under in-variable errors [11]. However, these estimators require the error in the dictionary to be stochastic and zero mean, which makes them less suitable in this setting. Also note that standard $\ell_1$ minimization estimators like the LASSO and Dantzig selector are highly unstable under errors in the dictionary and would lead to much worse guarantees in terms of radius of convergence (as studied in [1]). We establish error guarantees for the robust sparse estimator, the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector, under fixed corruption in the dictionary. We also establish that this estimator is minimax optimal when the error in the sparse estimate is measured in infinity norm $\|\hat\theta - \theta^*\|_\infty$ and the dictionary is corrupted.
The $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector

Define the estimator $\hat\theta$ such that $(\hat\theta, \hat t, \hat u) \in \mathbb R^r \times \mathbb R_+ \times \mathbb R_+$ is the solution to the convex minimization problem

$$\min_{\theta, t, u}\ \bigg\{ \|\theta\|_1 + \lambda t + \rho u \ :\ \frac{1}{d}\big\|A^\top(y - A\theta)\big\|_\infty \le \gamma t + R u,\ \|\theta\|_2 \le t,\ \|\theta\|_\infty \le u \bigg\} \qquad (2)$$

where $\lambda$, $\gamma$ and $\rho$ are tuning parameters that are chosen appropriately, and $R$ is an upper bound on the error in our dictionary measured in matrix infinity norm. Henceforth the first coordinate ($\hat\theta$) of this estimator is called $MUS_{\lambda,\gamma,\rho}(y, A, R)$, where the first argument is the sample, the second is the matrix, and the third is the value of the upper bound on the error of the dictionary measured in infinity norm. We will see that under our assumptions we will be able to establish an upper bound on the error of this estimator, $\|\hat\theta - \theta^*\|_\infty \le 16RM\big(R(s+1) + s/\sqrt d\big)$, where $|\theta_j^*| \le M$ for all $j$. We define a threshold at each step, $\tau = 16RM\big(R(s+1) + s/\sqrt d\big)$. The thresholded estimate $\bar\theta$ is defined as

$$\bar\theta_i = \hat\theta_i\, \mathbb I\big[|\hat\theta_i| > \tau\big] \qquad (3)$$

for all $i \in \{1, 2, \dots, r\}$. Our assumptions will ensure that we have the guarantee $\mathrm{sgn}(\bar\theta) = \mathrm{sgn}(\theta^*)$. This will be crucial in our proof of convergence. The analysis of this estimator is presented in Appendix B.
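Program (2) is a small conic problem, so it can be handed to an off-the-shelf convex solver. The sketch below uses cvxpy; the default tuning values are placeholders, not the schedules prescribed in Section 2, and the final thresholding line implements (3).

```python
# Sketch: solving program (2) with cvxpy, then thresholding as in (3).
import cvxpy as cp
import numpy as np

def mu_selector(y, A, R, tau, lam=1.0, gam=1.0, rho=1.0):
    d, r = A.shape
    theta, t, u = cp.Variable(r), cp.Variable(), cp.Variable()
    constraints = [
        cp.norm(A.T @ (y - A @ theta), "inf") / d <= gam * t + R * u,
        cp.norm(theta, 2) <= t,
        cp.norm(theta, "inf") <= u,
    ]
    cp.Problem(cp.Minimize(cp.norm(theta, 1) + lam * t + rho * u),
               constraints).solve()
    theta_hat = theta.value
    return theta_hat * (np.abs(theta_hat) > tau)   # thresholded estimate (3)
```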
As with previous analyses of alternating minimization for dictionary learning [1; 4], we rely on identifying the sign of the sparse covariates correctly at each step. For the sparse recovery step, we restrict ourselves to a class of two-step estimators, where we first estimate a vector $\hat\theta$ with some error guarantee in infinity norm, $\|\hat\theta - \theta^*\|_\infty \le \tau$, and then have an element-wise thresholding step ($\bar\theta_i = \hat\theta_i \mathbb I[|\hat\theta_i| > \tau]$). To identify the sign correctly using this class of thresholded estimators, we would like in the first step to use an estimator that is optimal, as this would lead to tighter control over the radius of convergence. This makes the choice of the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector natural, as we will show it is minimax optimal under certain settings.
Theorem 1 (informal). Define the sets of matrices $\mathcal A = \{B \in \mathbb R^{d \times r} : \|B_i\|_2 \le 1,\ \forall i \in \{1,\dots,r\}\}$ and $\mathcal W = \{P \in \mathbb R^{d \times r} : \|P\|_\infty \le R\}$ with $R = O(1/\sqrt s)$. Then there exist an $A^* \in \mathcal A$ and $W \in \mathcal W$ with $A := A^* + W$ such that

$$\inf_{\hat T}\ \sup\ \big\|\hat T - \theta^*\big\|_\infty \ \ge\ C R L \sqrt{s\Big(1 - \frac{\log(s)}{\log(r)}\Big)}, \qquad (4)$$

where the infimum is over all measurable estimators $\hat T$ with input $(A^*\theta^*, A, R)$, and the supremum is over $s$-sparse vectors $\theta^*$ with 2-norm $L > 0$.

Remark 2. Note that when $R = O(1/\sqrt s)$ and $s = O(\sqrt d)$, this lower bound matches the upper bound we have for Theorem 16 (up to logarithmic factors), and hence the $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector is minimax optimal.

The proof of this theorem follows by Fano's method and is relegated to Appendix C.
2.2  Gradient Update for the dictionary

We note that the update to the dictionary at each step in Algorithm 1 is as follows:

$$A_{ij}^{(t)} = A_{ij}^{(t-1)} - \eta \underbrace{\Bigg(\frac{1}{n}\sum_{k=1}^n\Big[\sum_{p=1}^r A_{ip}^{(t-1)}\, x_p^{k,(t)}\, x_j^{k,(t)} - y_i^k\, x_j^{k,(t)}\Big]\Bigg)}_{=:\ \hat g_{ij}^{(t)}}$$

for $i \in \{1,\dots,d\}$, $j \in \{1,\dots,r\}$ and $t \in \{1,\dots,T\}$. If we consider the loss function at time step $t$ built using the vector samples $y^1, \dots, y^n$ and sparse estimates $x^{1,(t)}, \dots, x^{n,(t)}$,

$$L_n(A) = \frac{1}{2n}\sum_{k=1}^n \big\|y^k - A x^{k,(t)}\big\|_2^2, \qquad A \in \mathbb R^{d \times r},$$

we can identify the update to the dictionary $\hat g^{(t)}$ as the gradient of this loss function evaluated at $A^{(t-1)}$,

$$\hat g^{(t)} = \frac{\partial L_n(A)}{\partial A}\bigg|_{A^{(t-1)}}.$$
3  Main Results and Assumptions

In this section we state our convergence result and the assumptions under which our results are valid.

3.1  Assumptions
Assumptions on $A^*$

(A1) $\mu$-incoherence: We assume the true underlying dictionary is $\mu$-incoherent:
$$|\langle A_i^*, A_j^*\rangle| \le \mu \quad \forall\, i, j \in \{1,\dots,r\} \text{ such that } i \ne j.$$
This is a standard assumption in the sparse regression literature when support recovery is of interest. It was introduced in [15; 34] in signal processing and independently in [38; 25] in statistics. We can also establish guarantees under the strictly weaker $\ell_1$-sensitivity condition (cf. [16]) used in analyzing sparse estimators under in-variable uncertainty in [9; 31]. The $\{\ell_1, \ell_2, \ell_\infty\}$-MU Selector that we use for our sparse recovery step also works with this more general assumption; however, for ease of exposition we assume $A^*$ to be $\mu$-incoherent.

(A2) Bounded max-norm: We assume that $A^*$ is bounded in matrix infinity norm:
$$\|A^*\|_\infty \le \frac{C_b}{s}.$$
This is in contrast with previous work that imposes conditions on the operator norm of $A^*$ [4; 1; 5]. Our assumptions help provide guarantees under alternate conditions, and they also allow the operator norm to grow with dimension, whereas earlier work requires $A^*$ to be such that $\|A^*\|_{op} \le C\sqrt{r/d}$. In general the infinity norm and operator norm balls are hard to compare. However, one situation where a comparison is possible is if we assume the entries of the dictionary are drawn i.i.d. from a Gaussian distribution $\mathcal N(0, \sigma^2)$. Then by standard concentration theorems, for the operator norm condition to be satisfied we would need the variance $\sigma^2$ of the distribution to scale as $O(1/d)$, while for the infinity norm condition to be satisfied we need the variance to be $\tilde O(1/s^2)$. This means that, modulo constants, the variance can be much larger for the infinity norm condition to be satisfied than for the operator norm condition.

(A3) Normalized columns: We assume that all the columns of $A^*$ are normalized to 1:
$$\|A_i^*\|_2 = 1 \quad \forall\, i \in \{1,\dots,r\}.$$
Note that the samples $\{y^i\}_{i=1}^n$ are invariant when we scale the columns of $A^*$ or under permutations of its columns. Thus we restrict ourselves to dictionaries with normalized columns and label the entire equivalence class of dictionaries with permuted columns and varying signs as $A^*$. We will converge linearly to the dictionary in this equivalence class that is closest to our initial estimate $A^{(0)}$.
Assumption on the initial estimate and initialization

(B1) We require an initial estimate for the dictionary $A^{(0)}$ such that
$$\|A^{(0)} - A^*\|_\infty \le \frac{C_R}{s},$$
with $2C_b \le C_R$, where $C_R = 1/(2000M^2)$. Assuming $2C_b \le C_R$ allows us to have a fast random initialization where we draw each entry of the initial estimate from the uniform distribution (on the interval $[-C_b/s, C_b/s]$); a one-line sketch follows below. This allows us to circumvent the computationally heavy SVD/clustering step involved in getting an initial dictionary estimate required in previous work [4; 1; 5]. Note that we need a random initialization and cannot start off with $A^{(0)} = 0$, as this would be equally close to the entire equivalence class of dictionaries $A^*$ (with varying signs of columns) and would cause our sparse estimator to fail. A random initialization perturbs the initial solution to be closest to one of the dictionaries in the equivalence class, and we then converge linearly to that dictionary in the class.
Assumptions on $x^*$

Next we assume a generative model on the $s$-sparse covariates $x^*$. Here are the assumptions we make about the (unknown) distribution of $x^*$.

(C1) Conditional independence: We assume that the distribution of the non-zero entries of $x^*$ is conditionally independent and identically distributed. That is, $x_i^* \perp\!\!\!\perp x_j^* \mid x_i^*, x_j^* \ne 0$.

(C2) Sparsity level: We assume that the level of sparsity $s$ is bounded:
$$2 \le s \le \min\big(2\sqrt d,\ C_b\sqrt d,\ C\sqrt d/\mu\big),$$
where $C$ is an appropriate global constant such that $A^*$ satisfies assumption (D3); see Remark 15. For $\mu$-incoherent dictionaries this upper bound is tight up to constant factors for sparse recovery to be feasible [12; 18].

(C3) Boundedness: Conditioned on the event that $i$ is in the subset of non-zero entries, we have
$$m \le |x_i^*| \le M,$$
with $m \ge 32R^{(0)}M\big(R^{(0)}(s+1) + s/\sqrt d\big)$ and $M > 1$. This is needed for the thresholded sparse estimator to correctly predict the sign of the true covariate ($\mathrm{sgn}(x) = \mathrm{sgn}(x^*)$). We can also relax the boundedness assumption: it suffices for the $x_i^*$ to have sub-Gaussian distributions.

(C4) Probability of support: The probability of $i$ being in the support of $x^*$ is uniform over all $i \in \{1, 2, \dots, r\}$. This translates to
$$\Pr(x_i^* \ne 0) = \frac{s}{r} \ \ \forall\, i \in \{1,\dots,r\}, \qquad \Pr(x_i^*, x_j^* \ne 0) = \frac{s(s-1)}{r(r-1)} \ \ \forall\, i \ne j \in \{1,\dots,r\}.$$

(C5) Mean and variance of variables in the support: We assume that the non-zero random variables in the support of $x^*$ are centered and normalized:
$$\mathbb E\big(x_i^* \mid x_i^* \ne 0\big) = 0, \qquad \mathbb E\big(x_i^{*2} \mid x_i^* \ne 0\big) = 1.$$

We note that assumptions (A1), (A3) and (C1)–(C5) are similar to those made in [4; 1]. Agarwal et al. [1] require the matrices to satisfy the restricted isometry property, which is strictly weaker than $\mu$-incoherence; however, they can tolerate a much lower level of sparsity ($d^{1/6}$).
3.2  Main Result

Theorem 3. Suppose that the true dictionary $A^*$ and the distribution of the $s$-sparse samples $x^*$ satisfy the assumptions stated in Section 3.1, and that we are given an estimate $A^{(0)}$ such that $\|A^{(0)} - A^*\|_\infty \le R^{(0)} \le C_R/s$. If we are given $n$ i.i.d. samples in every iteration with $n = \Omega(rsM^2 \log(dr/\delta))$, then Algorithm 1 with parameters $(\{\tau^{(t)}\}_{t=1}^T, \{\lambda^{(t)}\}_{t=1}^T, \{\gamma^{(t)}\}_{t=1}^T, \{\rho^{(t)}\}_{t=1}^T, \eta)$ chosen as specified in Section 3.1 returns after $T$ iterations a dictionary $A^{(T)}$ such that
$$\|A^{(T)} - A^*\|_\infty \le \Big(\frac{3}{4}\Big)^T R^{(0)} + 4\eta|\Delta_n|,$$
where $4\eta|\Delta_n| \le R^{(0)}/4$ with probability $1 - \delta$.

We note that $\Delta_n$ can be driven smaller with high probability at the cost of more samples.
4  Proof of Convergence

In this section we prove the main convergence result stated as Theorem 3. To prove this result we will analyze the gradient update to the dictionary at each step. We will decompose this gradient update (which is a random variable) into a first term, which is its expected value, and a second term, which is its deviation from expectation. We will prove a deterministic convergence result by working with the expected value of the gradient and then appeal to standard concentration arguments to control the deviation of the gradient from its expected value with high probability.

By Lemma 8, Algorithm 1 is guaranteed to estimate the sign pattern correctly at every round of the algorithm, $\mathrm{sgn}(x) = \mathrm{sgn}(x^*)$ (see proof in Appendix A.1).

To un-clutter notation let $A_{ij}^* = a_{ij}^*$, $A_{ij}^{(t)} = a_{ij}$, $A_{ij}^{(t+1)} = a'_{ij}$. The $k$th coordinate of the $m$th covariate is written as $x_k^{m*}$. Similarly, let $x_k^m$ be the $k$th coordinate of the estimate of the $m$th covariate at step $t$. Finally, let $R^{(t)} = R$ and let $\hat g_{ij}$ be the $(i,j)$th element of the gradient with $n$ samples at step $t$. Unwrapping the expression for $\hat g_{ij}$ we get

$$\hat g_{ij} = \frac{1}{n}\sum_{m=1}^n \Big[\sum_{k=1}^r a_{ik} x_k^m - y_i^m\Big] x_j^m = \frac{1}{n}\sum_{m=1}^n \Big[\sum_{k=1}^r \big(a_{ik} x_k^m - a_{ik}^* x_k^{m*}\big)\Big] x_j^m$$
$$= \underbrace{\mathbb E\Big[\Big(\sum_{k=1}^r \big(a_{ik} x_k - a_{ik}^* x_k^*\big)\Big) x_j\Big]}_{=\, g_{ij}} \;+\; \underbrace{\frac{1}{n}\sum_{m=1}^n \Big[\sum_{k=1}^r \big(a_{ik} x_k^m - a_{ik}^* x_k^{m*}\big)\Big] x_j^m - \mathbb E\Big[\Big(\sum_{k=1}^r \big(a_{ik} x_k - a_{ik}^* x_k^*\big)\Big) x_j\Big]}_{=:\, \Delta_n},$$

where $g_{ij}$ denotes the $(i,j)$th element of the expected value (infinite samples) of the gradient. The second term $\Delta_n$ is the deviation of the gradient from its expected value. By Theorem 10 we can control the deviation of the sample gradient from its mean via an application of McDiarmid's inequality. With this notation in place we are now ready to prove Theorem 3.
Proof [Proof of Theorem 3] First we analyze the structure of the expected value of the gradient.

Step 1: Unwrapping the expected value of the gradient, we find it decomposes into three terms:

$$g_{ij} = \mathbb E\big[\big(a_{ij} x_j - a_{ij}^* x_j^*\big) x_j\big] + \mathbb E\Big[\sum_{k \ne j}\big(a_{ik} x_k - a_{ik}^* x_k^*\big) x_j\Big]$$
$$= \underbrace{(a_{ij} - a_{ij}^*)\,\frac{s}{r}\,\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big]}_{=:\, g_{ij}^c} \;+\; \underbrace{a_{ij}^*\,\frac{s}{r}\,\mathbb E\big[(x_j - x_j^*)\, x_j \mid x_j^* \ne 0\big]}_{=:\, \epsilon_1} \;+\; \underbrace{\mathbb E\Big[\sum_{k \ne j}\big(a_{ik} x_k - a_{ik}^* x_k^*\big) x_j\Big]}_{=:\, \epsilon_2}.$$

The first term $g_{ij}^c$ points in the correct direction and will be useful in converging to the right answer. The other terms could be in a bad direction, and we will control their magnitude with Lemma 5 such that $|\epsilon_1| + |\epsilon_2| \le \frac{s}{3r} R$. The proof of Lemma 5, which controls this error in the gradient, is the main technical challenge in the convergence analysis. Its proof is deferred to the appendix.
Step 2: Given this bound, we analyze the gradient update:

$$a'_{ij} = a_{ij} - \eta\,\hat g_{ij} = a_{ij} - \eta\,(g_{ij} + \Delta_n) = a_{ij} - \eta\,\big(g_{ij}^c + (\epsilon_1 + \epsilon_2) + \Delta_n\big).$$

So if we look at the distance to the optimum $a_{ij}^*$, we have the relation

$$a'_{ij} - a_{ij}^* = a_{ij} - a_{ij}^* - \eta\,(a_{ij} - a_{ij}^*)\,\frac{s}{r}\,\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big] - \eta\,\big\{(\epsilon_1 + \epsilon_2) + \Delta_n\big\}.$$

Taking absolute values, we get

$$|a'_{ij} - a_{ij}^*| \overset{(i)}{\le} \Big(1 - \eta\,\frac{s}{r}\,\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big]\Big)|a_{ij} - a_{ij}^*| + \eta\,\big\{|\epsilon_1| + |\epsilon_2| + |\Delta_n|\big\}$$
$$\overset{(ii)}{\le} \Big(1 - \eta\,\frac{s}{r}\,\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big]\Big)|a_{ij} - a_{ij}^*| + \eta\,\frac{s}{3r}\,R + \eta\,|\Delta_n|$$
$$\le \Big(1 - \eta\,\frac{s}{r}\Big(\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big] - \frac{1}{3}\Big)\Big)R + \eta\,|\Delta_n|,$$

provided the first term is non-negative. Here, (i) follows by the triangle inequality and (ii) is by Lemma 5. Next we give upper and lower bounds on $\mathbb E[x_j^2 \mid x_j^* \ne 0]$. We would expect that as $R$ gets smaller this variance term approaches $\mathbb E[x_j^{*2} \mid x_j^* \ne 0] = 1$. By invoking Lemma 6 we can bound this term as $\frac{2}{3} \le \mathbb E[x_j^2 \mid x_j^* \ne 0] \le \frac{4}{3}$. So if we want to converge at a rate $3/4$, it suffices to have

$$0 \overset{(i)}{\le} 1 - \eta\,\frac{s}{r}\Big(\mathbb E\big[x_j^2 \mid x_j^* \ne 0\big] - \frac{1}{3}\Big) \overset{(ii)}{\le} \frac{3}{4}.$$

Coupled with Lemma 6, inequality (i) follows from $\eta \le \frac{r}{s}$, while inequality (ii) follows from $\eta \ge \frac{3r}{4s}$. So if we unroll the bound for $t$ steps we have

$$|a_{ij}^{(t)} - a_{ij}^*| \le \frac{3}{4}\,|R^{(t-1)}| + \eta\,|\Delta_n| \le \frac{3}{4}\Big(\frac{3}{4}\,|R^{(t-2)}| + \eta\,|\Delta_n|\Big) + \eta\,|\Delta_n|$$
$$\le \Big(\frac{3}{4}\Big)^t |R^{(0)}| + \sum_{q=0}^{t-1}\Big(\frac{3}{4}\Big)^q \eta\,|\Delta_n| \le \Big(\frac{3}{4}\Big)^t |R^{(0)}| + 4\eta\,|\Delta_n|,$$

as $\sum_{q=0}^\infty (3/4)^q = 4$.

By Theorem 10, we have that $4\eta|\Delta_n| \le R^{(0)}/4$ with probability $1 - \delta$; thus we are guaranteed to remain in our initial ball of radius $R^{(0)}$ with high probability, completing the proof.
5  Conclusion

An interesting question would be to further explore and analyze the range of algorithms for which alternating minimization works, and to identify the conditions under which they provably converge. There also seem to be many open questions for improper dictionary learning and for developing provably faster algorithms there. Going beyond sparsity $\sqrt d$ still remains challenging, and as noted in previous work, alternating minimization also appears to break down experimentally in this regime, so new algorithms are required. Also, all theoretical work on analyzing alternating minimization for dictionary learning seems to rely on identifying the signs of the samples ($x^*$) correctly at every step. It would be an interesting theoretical question to analyze whether this is a necessary condition or whether an alternate proof strategy, and consequently a bigger radius of convergence, are possible. Lastly, it is not known what the optimal sample complexity for this problem is, and lower bounds there could be useful in designing more sample-efficient algorithms.
Acknowledgments
We gratefully acknowledge the support of the NSF through grant IIS-1619362, and of the Australian
Research Council through an Australian Laureate Fellowship (FL110100281) and through the ARC
Centre of Excellence for Mathematical and Statistical Frontiers. Thanks also to the Simons Institute
for the Theory of Computing Spring 2017 Program on Foundations of Machine Learning.
References
[1] Agarwal, A., A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon (2014). Learning sparsely used overcomplete dictionaries. In COLT, pp. 123–137.
[2] Agarwal, A., A. Anandkumar, and P. Netrapalli (2013). A clustering approach to learn sparsely-used overcomplete dictionaries. arXiv preprint arXiv:1309.1952.
[3] Aharon, M., M. Elad, and A. Bruckstein (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing 54(11), 4311–4322.
[4] Arora, S., R. Ge, T. Ma, and A. Moitra (2015). Simple, efficient, and neural algorithms for sparse coding. In COLT, pp. 113–149.
[5] Arora, S., R. Ge, and A. Moitra (2013). New algorithms for learning incoherent and overcomplete dictionaries.
[6] Balakrishnan, S., M. J. Wainwright, B. Yu, et al. (2017). Statistical guarantees for the EM algorithm: From population to sample-based analysis. The Annals of Statistics 45(1), 77–120.
[7] Barak, B., J. A. Kelner, and D. Steurer (2015). Dictionary learning and tensor decomposition via the sum-of-squares method. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 143–151. ACM.
[8] Belloni, A., M. Rosenbaum, and A. B. Tsybakov (2014). An {ℓ1, ℓ2, ℓ∞}-Regularization Approach to High-Dimensional Errors-in-variables Models. arXiv preprint arXiv:1412.7216.
[9] Belloni, A., M. Rosenbaum, and A. B. Tsybakov (2016). Linear and conic programming estimators in high dimensional errors-in-variables models. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[10] Boucheron, S., G. Lugosi, and P. Massart (2013). Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press.
[11] Chen, Y. and C. Caramanis (2013). Noisy and Missing Data Regression: Distribution-Oblivious Support Recovery.
[12] Donoho, D. L. and X. Huo (2001). Uncertainty principles and ideal atomic decomposition. IEEE Transactions on Information Theory 47(7), 2845–2862.
[13] Elad, M. and M. Aharon (2006). Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15(12), 3736–3745.
[14] Engan, K., S. O. Aase, and J. H. Husoy (1999). Method of optimal directions for frame design. In Acoustics, Speech, and Signal Processing, 1999. Proceedings., 1999 IEEE International Conference on, Volume 5, pp. 2443–2446. IEEE.
[15] Fuchs, J.-J. (2004). Recovery of exact sparse representations in the presence of noise. In Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on, Volume 2, pp. ii–533. IEEE.
[16] Gautier, E. and A. Tsybakov (2011). High-dimensional instrumental variables regression and confidence sets. arXiv preprint arXiv:1105.2454.
[17] Gribonval, R., R. Jenatton, and F. Bach (2015). Sparse and spurious: dictionary learning with noise and outliers. IEEE Transactions on Information Theory 61(11), 6298–6319.
[18] Gribonval, R. and M. Nielsen (2003). Sparse representations in unions of bases. IEEE Transactions on Information Theory 49(12), 3320–3325.
[19] Hardt, M. (2014). Understanding alternating minimization for matrix completion. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pp. 651–660. IEEE.
[20] Hazan, E. and T. Ma (2016). A Non-generative Framework and Convex Relaxations for Unsupervised Learning. In Advances in Neural Information Processing Systems, pp. 3306–3314.
[21] Jain, P., P. Netrapalli, and S. Sanghavi (2013). Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pp. 665–674. ACM.
[22] Lewicki, M. S. and T. J. Sejnowski (2000). Learning overcomplete representations. Neural Computation 12(2), 337–365.
[23] Loh, P.-L. and M. J. Wainwright (2011). High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems, pp. 2726–2734.
[24] McDiarmid, C. (1989). On the method of bounded differences. Surveys in Combinatorics 141(1), 148–188.
[25] Meinshausen, N. and P. Bühlmann (2006). High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 1436–1462.
[26] Netrapalli, P., P. Jain, and S. Sanghavi (2013). Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pp. 2796–2804.
[27] Netrapalli, P., U. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain (2014). Non-convex robust PCA. In Advances in Neural Information Processing Systems, pp. 1107–1115.
[28] Olshausen, B. A. and D. J. Field (1997). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research 37(23), 3311–3325.
[29] Rigollet, P. and A. Tsybakov (2011). Exponential screening and optimal rates of sparse estimation. The Annals of Statistics, 731–771.
[30] Rosenbaum, M., A. B. Tsybakov, et al. (2010). Sparse recovery under matrix uncertainty. The Annals of Statistics 38(5), 2620–2651.
[31] Rosenbaum, M., A. B. Tsybakov, et al. (2013). Improved matrix uncertainty selector. In From Probability to Statistics and Back: High-Dimensional Models and Processes—A Festschrift in Honor of Jon A. Wellner, pp. 276–290. Institute of Mathematical Statistics.
[32] Spielman, D. A., H. Wang, and J. Wright (2012). Exact Recovery of Sparsely-Used Dictionaries. In COLT, pp. 37–1.
[33] Sun, J., Q. Qu, and J. Wright (2017). Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory 63(2), 853–884.
[34] Tropp, J. A. (2006). Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory 52(3), 1030–1051.
[35] Tsybakov, A. B. (2009). Introduction to nonparametric estimation. Revised and extended from the 2004 French original. Translated by Vladimir Zaiats.
[36] Wu, S. and B. Yu (2015). Local identifiability of ℓ1-minimization dictionary learning: a sufficient and almost necessary condition. arXiv preprint arXiv:1505.04363.
[37] Yu, B. (1997). Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pp. 423–435. Springer.
[38] Zhao, P. and B. Yu (2006). On model selection consistency of Lasso. Journal of Machine Learning Research 7(Nov), 2541–2563.
6,407 | 6,796 | Learning ReLUs via Gradient Descent
Mahdi Soltanolkotabi
Ming Hsieh Department of Electrical Engineering
University of Southern California
Los Angeles, CA
[email protected]
Abstract
In this paper we study the problem of learning Rectified Linear Units (ReLUs),
which are functions of the form x ↦ max(0, ⟨w, x⟩) with w ∈ ℝ^d denoting the
weight vector. We study this problem in the high-dimensional regime where the
number of observations are fewer than the dimension of the weight vector. We
assume that the weight vector belongs to some closed set (convex or nonconvex)
which captures known side-information about its structure. We focus on the
realizable model where the inputs are chosen i.i.d. from a Gaussian distribution
and the labels are generated according to a planted weight vector. We show that
projected gradient descent, when initialized at 0, converges at a linear rate to the
planted model with a number of samples that is optimal up to numerical constants.
Our results on the dynamics of convergence of these very shallow neural nets may
provide some insights towards understanding the dynamics of deeper architectures.
1
Introduction
Nonlinear data-fitting problems are fundamental to many supervised learning tasks in signal processing and machine learning. Given training data consisting of n pairs of input features x_i ∈ ℝ^d and desired outputs y_i ∈ ℝ we wish to infer a function that best explains the training data. In this paper we focus on fitting Rectified Linear Units (ReLUs) to the data, which are functions φ_w : ℝ^d → ℝ of the form
    φ_w(x) = max(0, ⟨w, x⟩).
A natural approach to fitting ReLUs to data is via minimizing the least-squares misfit aggregated over
the data. This optimization problem takes the form
    min_{w ∈ ℝ^d}  L(w) := (1/n) Σ_{i=1}^n (max(0, ⟨w, x_i⟩) − y_i)²   subject to R(w) ≤ R,   (1.1)
with R : ℝ^d → ℝ denoting a regularization function that encodes prior information on the weight vector.
Fitting nonlinear models such as ReLUs has a rich history in statistics and learning theory [12]
with interesting new developments emerging [6] (we shall discuss all these results in greater detail in
Section 5). Most recently, nonlinear data fitting problems in the form of neural networks (a.k.a. deep
learning) have emerged as powerful tools for automatically extracting interpretable and actionable
information from raw forms of data, leading to striking breakthroughs in a multitude of applications
[13, 15, 4]. In these and many other empirical domains it is common to use local search heuristics
such as gradient or stochastic gradient descent for nonlinear data fitting. These local search heuristics
are surprisingly effective on real or randomly generated data. However, despite their empirical success
the reasons for their effectiveness remain mysterious.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Focusing on fitting ReLUs, a-priori it is completely unclear why local search heuristics such as
gradient descent should converge for problems of the form (1.1), as not only the regularization
function may be nonconvex but also the loss function! Efficient fitting of ReLUs in this high-dimensional setting poses new challenges: When are the iterates able to escape local optima and
saddle points and converge to global optima? How many samples do we need? How does the number
of samples depend on the a-priori prior knowledge available about the weights? What regularizer
is best suited to utilizing a particular form of prior knowledge? How many passes (or iterations) of
the algorithm is required to get to an accurate solution? At the heart of answering these questions is
the ability to predict convergence behavior/rate of (non)convex constrained optimization algorithms.
In this paper we build upon a new framework developed in the context of phase retrieval [21] for
analyzing nonconvex optimization problems to address such challenges.
2
Precise measures for statistical resources
We wish to characterize the rates of convergence for the projected gradient updates (3.2) as a function
of the number of samples, the available prior knowledge and the choice of the regularizer. To make
these connections precise and quantitative we need a few definitions. Naturally the required number
of samples for reliable data fitting depends on how well the regularization function R can capture the
properties of the weight vector w. For example, if we know that the weight vector is approximately
sparse, naturally using an ℓ₁ norm for the regularizer is superior to using an ℓ₂ regularizer. To quantify
this capability we first need a couple of standard definitions which we adapt from [17, 18, 21].
Definition 2.1 (Descent set and cone) The descent set of a function R at a point w* is defined as
    D_R(w*) = {h : R(w* + h) ≤ R(w*)}.
The cone of descent is defined as a closed cone C_R(w*) that contains the descent set, i.e. D_R(w*) ⊆ C_R(w*). The tangent cone is the conic hull of the descent set, that is, the smallest closed cone C_R(w*) obeying D_R(w*) ⊆ C_R(w*).
We note that the capability of the regularizer R in capturing the properties of the unknown weight vector w* depends on the size of the descent cone C_R(w*). The smaller this cone is the more suited the function R is at capturing the properties of w*. To quantify the size of this set we shall use the notion of mean width.
Definition 2.2 (Gaussian width) The Gaussian width of a set C ⊆ ℝ^d is defined as
    ω(C) := E_g[ sup_{z∈C} ⟨g, z⟩ ],
where the expectation is taken over g ∼ N(0, I_d). Throughout we use B^d / S^{d−1} to denote the unit ball/sphere of ℝ^d.
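Although the paper works with ω(·) analytically, Definition 2.2 lends itself to a quick Monte Carlo estimate whenever the inner maximization is tractable. The following numpy sketch is our own illustration (not from the paper) and checks two textbook cases where the supremum has a closed form.

import numpy as np

def gaussian_width(sup_fn, d, n_trials=2000, seed=0):
    # Monte Carlo estimate of omega(C) = E_g[ sup_{z in C} <g, z> ];
    # sup_fn(g) must return the supremum for a single Gaussian draw g.
    rng = np.random.default_rng(seed)
    return np.mean([sup_fn(rng.standard_normal(d)) for _ in range(n_trials)])

d = 1000
# Unit ell_1 ball: sup_{||z||_1 <= 1} <g, z> = ||g||_inf, so omega ~ sqrt(2 ln d).
w_l1 = gaussian_width(lambda g: np.abs(g).max(), d)
# Unit sphere: sup_{||z||_2 = 1} <g, z> = ||g||_2, so omega ~ sqrt(d).
w_sphere = gaussian_width(lambda g: np.linalg.norm(g), d)
print(w_l1, np.sqrt(2 * np.log(d)))   # both roughly 3.7
print(w_sphere, np.sqrt(d))           # both roughly 31.6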
We now have all the definitions in place to quantify the capability of the function R in capturing the properties of the unknown parameter w*. This naturally leads us to the definition of the minimum required number of samples.
Definition 2.3 (minimal number of samples) Let C_R(w*) be a cone of descent of R at w*. We define the minimal sample function as
    M(R, w*) = ω²(C_R(w*) ∩ B^d).
We shall often use the short-hand n₀ = M(R, w*) with the dependence on R, w* implied.
We note that n₀ is exactly the minimum number of samples required for structured signal recovery from linear measurements when using convex regularizers [3, 1]. Specifically, the optimization problem
    min_w Σ_{i=1}^n (y_i − ⟨x_i, w⟩)²   subject to R(w) ≤ R(w*)   (2.1)
succeeds at recovering an unknown weight vector w* with high probability from n observations of the form y_i = ⟨a_i, w*⟩ if and only if n ≥ n₀.¹ While this result is only known to be true for convex regularization functions we believe that n₀ also characterizes the minimal number of samples even for nonconvex regularizers in (2.1). See [17] for some results in the nonconvex case as well as the role this quantity plays in the computational complexity of projected gradient schemes for linear inverse problems. Given that with nonlinear samples we have less information (we lose some information compared to linear observations) we cannot hope to recover the weight vector from n ≤ n₀ when using (1.1). Therefore, we can use n₀ as a lower bound on the minimum number of observations required for projected gradient descent iterations (3.2) to succeed at finding the right model.
3
Theoretical results for learning ReLUs
A simple heuristic for optimizing (1.1) is to use gradient descent. One challenging aspect of the
above loss function is that it is not differentiable and it is not clear how to run projected gradient
descent. However, this does not pose a fundamental challenge as the loss function is differentiable
except for isolated points and we can use the notion of generalized gradients to define the gradient at
a non-differentiable point as one of the limit points of the gradient in a local neighborhood of the
non-differentiable point. For the loss in (1.1) the generalized gradient takes the form
    ∇L(w) := (1/n) Σ_{i=1}^n (ReLU(⟨w, x_i⟩) − y_i) (1 + sgn(⟨w, x_i⟩)) x_i.   (3.1)
Therefore, projected gradient descent takes the form
    w_{τ+1} = P_K(w_τ − μ_τ ∇L(w_τ)),   (3.2)
where μ_τ is the step size and K = {w ∈ ℝ^d : R(w) ≤ R} is the constraint set, with P_K denoting the Euclidean projection onto this set.
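To make the updates concrete, here is a minimal numpy sketch of the generalized gradient (3.1) and the PGD iteration (3.2), specialized to the sparsity constraint R(w) = ‖w‖_{ℓ₀} used in the experiments of Section 4; the function names and the hard-thresholding projection are our own choices, not code from the paper.

import numpy as np

def grad_L(w, X, y):
    # Generalized gradient (3.1): (1/n) sum_i (ReLU(<w,x_i>) - y_i)(1 + sgn(<w,x_i>)) x_i
    z = X @ w
    return X.T @ ((np.maximum(z, 0.0) - y) * (1.0 + np.sign(z))) / len(y)

def project_sparse(w, s):
    # Euclidean projection onto {w : ||w||_0 <= s}: keep the s largest-magnitude entries.
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def pgd_relu(X, y, s, n_iter=20):
    # Updates (3.2)/(3.3) with mu_0 = 2 and mu_tau = 1 afterwards, started at w_0 = 0.
    w = np.zeros(X.shape[1])
    for tau in range(n_iter):
        mu = 2.0 if tau == 0 else 1.0
        w = project_sparse(w - mu * grad_L(w, X, y), s)
    return w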
Theorem 3.1 Let w* ∈ ℝ^d be an arbitrary weight vector and R : ℝ^d → ℝ be a proper function (convex or nonconvex). Suppose the feature vectors x_i ∈ ℝ^d are i.i.d. Gaussian random vectors distributed as N(0, I) with the corresponding labels given by
    y_i = max(0, ⟨x_i, w*⟩).
To estimate w*, we start from the initial point w₀ = 0 and apply the Projected Gradient Descent (PGD) updates of the form
    w_{τ+1} = P_K(w_τ − μ_τ ∇L(w_τ)),   (3.3)
with K := {w ∈ ℝ^d : R(w) ≤ R(w*)} and ∇L defined via (3.1). Also set the learning parameter sequence to μ₀ = 2 and μ_τ = 1 for all τ = 1, 2, . . . and let n₀ = M(R, w*), per Definition 2.3, be our lower bound on the number of observations. Also assume
    n > c n₀   (3.4)
holds for a fixed numerical constant c. Then there is an event of probability at least 1 − 9e^{−γn} such that on this event the updates (3.3) obey
    ‖w_τ − w*‖_{ℓ₂} ≤ (1/2)^τ ‖w*‖_{ℓ₂}.   (3.5)
Here γ is a fixed numerical constant.
The first interesting and perhaps surprising aspect of this result is its generality: it applies not only to
convex regularization functions but also nonconvex ones! As we mentioned earlier the optimization
problem in (1.1) is not known to be tractable even for convex regularizers. Despite the nonconvexity
of both the objective and regularizer, the theorem above shows that with a near minimal number
¹We would like to note that n₀ only approximately characterizes the minimum number of samples required. A more precise characterization is φ⁻¹(ω²(C_R(w*) ∩ B^d)) ≲ n₀ ≲ ω²(C_R(w*) ∩ B^d), where φ(t) = √2 Γ((t+1)/2)/Γ(t/2) ≈ √t. However, since our results have unspecified constants we avoid this more accurate characterization.
[Figure 1: plot of the estimation error ‖w_τ − w*‖_{ℓ₂} versus the iteration count τ for ReLU and linear samples; see caption below.]
Figure 1: Estimation error (‖w_τ − w*‖_{ℓ₂}) obtained via running PGD iterates as a function of the number of iterations τ. The plots are for two different observation models: 1) ReLU observations of the form y = ReLU(Xw*) and 2) linear observations of the form y = Xw*. The bold colors depict average behavior over 100 trials. Non-bold colors depict the estimation error of some sample trials.
of data samples, projected gradient descent provably learns the original weight vector w* without getting trapped in any local optima.
Another interesting aspect of the above result is that the convergence rate is linear. Therefore, to achieve a relative error of ε the total number of iterations is on the order of O(log(1/ε)). Thus the overall computational complexity is on the order of O(nd log(1/ε)) (in general the cost is the total number of iterations multiplied by the cost of applying the feature matrix X and its transpose). As a result, the computational complexity is also now optimal in terms of dependence on the matrix dimensions. Indeed, for a dense matrix even verifying that a good solution has been achieved requires one matrix-vector multiplication which takes O(nd) time.
4
Numerical experiments
In this section we carry out a simple numerical experiment to corroborate our theoretical results. For
this purpose we generate a unit norm sparse vector w* ∈ ℝ^d of dimension d = 1000 containing s = d/50 non-zero entries. We also generate a random feature matrix X ∈ ℝ^{n×d} with n = ⌈8s log(d/s)⌉ containing i.i.d. N(0, 1) entries. We now take two sets of observations of size n from w*:
• ReLU observations: the response vector is equal to y = ReLU(Xw*).
• Linear observations: the response is y = Xw*.
We apply the projected gradient iterations to both observation models starting from w₀ = 0. For the ReLU observations we use the step size discussed in Theorem 3.1. For the linear model we apply projected gradient descent updates of the form
    w_{τ+1} = P_K(w_τ − (1/n) Xᵀ(Xw_τ − y)).
In both cases we use the regularizer R(w) = ‖w‖_{ℓ₀} so that the projection only keeps the top s entries of the vector (a.k.a. iterative hard thresholding). In Figure 1 the resulting estimation errors (‖w_τ − w*‖_{ℓ₂}) are depicted as a function of the number of iterations τ. The bold colors depict average behavior over 100 trials. The estimation error of some sample trials is also depicted in non-bold
colors. This plot clearly shows that PGD iterates applied to ReLU observations converge quickly to the ground truth. This figure also clearly demonstrates that the behavior of the PGD iterates applied to both models is similar, further corroborating the results of Theorem 3.1. We note that the sample complexity used in this simulation is 8s log(d/s), which is a constant factor away from n₀ ∼ s log(d/s), confirming our assertion that the required sample complexity is a constant factor away from n₀ (as predicted by Theorem 3.1).
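The experiment described above can be reproduced along the following lines, reusing pgd_relu from the sketch in Section 3. This is our own reconstruction of the setup; the trial averaging and error curves of Figure 1 are omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
d, s = 1000, 20                               # s = d/50 non-zero entries
n = int(np.ceil(8 * s * np.log(d / s)))

w_star = np.zeros(d)
w_star[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
w_star /= np.linalg.norm(w_star)              # unit-norm planted weight vector

X = rng.standard_normal((n, d))
y_relu = np.maximum(X @ w_star, 0.0)          # ReLU observations
y_lin = X @ w_star                            # linear observations

w_hat = pgd_relu(X, y_relu, s)
print("estimation error:", np.linalg.norm(w_hat - w_star))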
5
Discussions and prior art
There is a large body of work on learning nonlinear models. A particular class of such problems that have been studied are the so-called idealized Single Index Models (SIMs) [9, 10]. In these problems the inputs are labeled examples {(x_i, y_i)}_{i=1}^n ⊆ ℝ^d × ℝ which are guaranteed to satisfy y_i = f(⟨w, x_i⟩) for some w ∈ ℝ^d and nondecreasing (Lipschitz continuous) f : ℝ → ℝ. The goal in this problem is to find a (nearly) accurate such f and w. An interesting polynomial-time algorithm called the Isotron exists for this problem [12, 11]. In principle, this approach can also be used to fit ReLUs. However, these results differ from ours in terms of both assumptions and results. On the one hand, the assumptions are slightly more restrictive as they require bounded features x_i, outputs y_i and weights. On the other hand, these results hold for much more general distributions and more general models than the realizable model studied in this paper. These results also do not apply in the high dimensional regime where the number of observations is significantly smaller than the number of parameters (see [5] for some results in this direction). In the realizable case, the Isotron result requires O(1/ε) iterations to achieve ε error in objective value. In comparison, our results guarantee convergence to a solution with relative error ε (‖w_τ − w*‖_{ℓ₂}/‖w*‖_{ℓ₂} ≤ ε) after log(1/ε) iterations.
Focusing on the specific case of ReLU functions, an interesting recent result [6] shows that reliable learning of ReLUs is possible under very general but bounded distributional assumptions. To achieve an accuracy of ε the algorithm runs in poly(1/ε) time. In comparison, as mentioned earlier our result requires log(1/ε) iterations for reliable parameter estimation. We note however we study the problem in different settings and a direct comparison is not possible between the two results.
We would like to note that there is an interesting growing literature on learning shallow neural
networks with a single hidden layer with i.i.d. inputs, and under a realizable model (i.e. the labels are
generated from a network with planted weights) [23, 2, 25]. For isotropic Gaussian inputs, [23] shows
that with two hidden units (k = 2) there are no critical points for configurations where both weight
vectors fall into (or outside) the cone of ground truth weights. With the same assumptions, [2] proves
that for a single-hidden ReLU network with a single non-overlapping convolutional filter, all local
minimizers of the population loss are global; they also give counter-examples in the overlapping case
and prove the problem is NP-hard when inputs are not Gaussian. [25] studies general single-hidden
layer networks and shows that a version of gradient descent which uses a fresh batch of samples in
each iteration converges to the planted model. This holds using an initialization obtained via a tensor
decomposition method. Our approach and convergence results differ from this literature in a variety
of different ways. First, we focus on zero hidden layers with a regularization term. Some of this
literature focuses on networks with one hidden layer without (or with specific) regularization. Second, unlike some
of these results such as [2, 14], we study the optimization properties of the empirical function, not its
expected value. Third, we initialize at zero in lieu of sophisticated initialization schemes. Finally,
our framework does not require a fresh batch of samples per new gradient iteration as in [25]. We
also note that several publications study the effect of over-parametrization on the training of neural
networks without any regularization [19, 8, 16, 22]. Therefore, the global optima are not unique
and hence the solutions may not generalize. In comparison we study the problem with an arbitrary
regularization which allows for a unique global optima.
6
Proofs
6.1
Preliminaries
In this section we gather some useful results on concentration of stochastic processes which will be
crucial in our proofs. These results are mostly adapted from [21]. We begin with a lemma which is a
direct consequence of Gordon's escape from the mesh lemma [7].
Lemma 6.1 Assume C ⊆ ℝ^d is a cone and S^{d−1} is the unit sphere of ℝ^d. Also assume that
    n ≥ max( 20 ω²(C ∩ S^{d−1})/δ², 1/(2δ) − 1 )
for a fixed numerical constant c. Then for all h ∈ C
    | (1/n) Σ_{i=1}^n (⟨x_i, h⟩)² − ‖h‖²_{ℓ₂} | ≤ δ ‖h‖²_{ℓ₂}
holds with probability at least 1 − 2e^{−δ²n/360}.
We also need a generalization of the above lemma stated below.
Lemma 6.2 ([21]) Assume C ⊆ ℝ^d is a cone (not necessarily convex) and S^{d−1} is the unit sphere of ℝ^d. Also assume that
    n ≥ max( 80 ω²(C ∩ S^{d−1})/δ², 2/δ − 1 )
for a fixed numerical constant c. Then for all u, h ∈ C
    | (1/n) Σ_{i=1}^n ⟨x_i, u⟩⟨x_i, h⟩ − uᵀh | ≤ δ ‖u‖_{ℓ₂} ‖h‖_{ℓ₂}
holds with probability at least 1 − 6e^{−δ²n/1440}.
We next state a generalization of Gordon's escape through the mesh lemma, also from [21].
Lemma 6.3 ([21]) Let s ∈ ℝⁿ be a fixed vector with nonzero entries and construct the diagonal matrix S = diag(s). Also, let X ∈ ℝ^{n×d} have i.i.d. N(0, 1) entries. Furthermore, assume T ⊆ ℝ^d and define b_d(s) = E[‖Sg‖_{ℓ₂}], where g ∈ ℝⁿ is distributed as N(0, I_n). Also, define
    σ(T) := max_{v∈T} ‖v‖_{ℓ₂}.
Then for all u ∈ T
    | ‖SXu‖_{ℓ₂} − b_d(s) ‖u‖_{ℓ₂} | ≤ ‖s‖_{ℓ∞} ω(T) + η
holds with probability at least 1 − 6 e^{−η² / (8 ‖s‖²_{ℓ∞} σ²(T))}.
Corollary 6.4 Let s ? Rd be fixed vector with nonzero entries and assume T ? B d . Furthermore,
assume
2
2
?s?`2 ? max (20 ?s?`?
Then for all u ? T ,
? 2 (T ) 3
,
? 1) .
?2
2?
RRR n 2
R
2
RRR ?i=1 si (?xi , u?) ? ?u?2 RRRRR ? ?,
`2 RR
2
RRRR
RRR
?s?`2
R
?2
2
holds with probability at least 1 ? 6e? 1440 ?s?`2 .
6.2
Convergence proof (Proof of Theorem 3.1)
In this section we shall prove Theorem 3.1. Throughout, we use the shorthand C to denote the descent cone of R at w*, i.e. C = C_R(w*). We begin by analyzing the first iteration. Using w₀ = 0 we have
    w₁ := P_K(w₀ − μ₀ ∇L(w₀)) = P_K( (2/n) Σ_{i=1}^n y_i x_i ) = P_K( (2/n) Σ_{i=1}^n ReLU(⟨x_i, w*⟩) x_i ).
We use the argument of [21][Page 25, inequality (7.34)] which shows that
    ‖w₁ − w*‖_{ℓ₂} ≤ 2 · sup_{u∈C∩B^d} uᵀ( (2/n) Σ_{i=1}^n ReLU(⟨x_i, w*⟩) x_i − w* ).   (6.1)
Using ReLU(z) = (z + |z|)/2 we have
    (2/n) Σ_{i=1}^n ReLU(⟨x_i, w*⟩)⟨x_i, u⟩ − ⟨u, w*⟩ = uᵀ( (1/n) XᵀX − I ) w* + (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u⟩.   (6.2)
We proceed by bounding the first term in the above equality. To this aim we decompose u in the direction parallel/perpendicular to that of w* and arrive at
    uᵀ( (1/n) XᵀX − I ) w*
      = (⟨u, w*⟩/‖w*‖²_{ℓ₂}) (w*)ᵀ( (1/n) XᵀX − I ) w* + (1/n) ⟨X (I − w*(w*)ᵀ/‖w*‖²_{ℓ₂}) u, Xw*⟩
      = ⟨u, w*⟩ ( ‖g‖²_{ℓ₂}/n − 1 ) + (‖g‖_{ℓ₂}/√n) · (‖w*‖_{ℓ₂}/√n) aᵀ(I − w*(w*)ᵀ/‖w*‖²_{ℓ₂}) u
      ≤ ‖w*‖_{ℓ₂} | ‖g‖²_{ℓ₂}/n − 1 | + (‖g‖_{ℓ₂}/√n) ‖w*‖_{ℓ₂} sup_{u∈C∩B^d} (1/√n) aᵀ(I − w*(w*)ᵀ/‖w*‖²_{ℓ₂}) u,   (6.3)
with g ∈ ℝⁿ and a ∈ ℝ^d independent Gaussian random vectors distributed as N(0, I_n) and N(0, I_d), respectively. By concentration of chi-squared random variables
    | ‖g‖²_{ℓ₂}/n − 1 | ≤ Δ   (6.4)
holds with probability at least 1 − 2e^{−nΔ²/8}. Also,
    (1/√n) aᵀ(I − w*(w*)ᵀ/‖w*‖²_{ℓ₂}) u ≤ (1/√n) ( ω(C ∩ B^d) + η )   (6.5)
holds with probability at least 1 − e^{−η²/2}. Plugging (6.4) with Δ = δ/6 and (6.5) with η = (δ/6)√n into (6.3), as long as n ≥ (36/δ²) ω²(C ∩ B^d), then
    sup_{u∈C∩B^d} uᵀ( (1/n) XᵀX − I ) w* ≤ (δ/2) ‖w*‖_{ℓ₂}   (6.6)
holds with probability at least 1 − 3e^{−nδ²/288}.
We now focus on bounding the second term in (6.2). To this aim we decompose u in the direction parallel/perpendicular to that of w* and arrive at
    | (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u⟩ |
      = | (⟨u, w*⟩/‖w*‖²_{ℓ₂}) (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, w*⟩ + (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u_⊥⟩ |
      ≤ ‖w*‖_{ℓ₂} | (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, w*⟩ / ‖w*‖²_{ℓ₂} | + | (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u_⊥⟩ |,   (6.7)
with u_⊥ = (I − w*(w*)ᵀ/‖w*‖²_{ℓ₂}) u. Now note that |⟨x_i, w*⟩|⟨x_i, w*⟩/‖w*‖²_{ℓ₂} is sub-exponential and
    ‖ |⟨x_i, w*⟩|⟨x_i, w*⟩/‖w*‖²_{ℓ₂} ‖_{ψ₁} ≤ c,
with c a fixed numerical constant. Thus by Bernstein's type inequality ([24][Proposition 5.16])
    | (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, w*⟩ / ‖w*‖²_{ℓ₂} | ≤ t   (6.8)
holds with probability at least 1 − 2e^{−γ n min(t², t)} with γ a fixed numerical constant. Also note that
    (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u_⊥⟩ ≤ √( (1/n) Σ_{i=1}^n |⟨x_i, w*⟩|² ) · (1/√n) |⟨g, u_⊥⟩|.
Furthermore, (1/n) Σ_{i=1}^n |⟨x_i, w*⟩|² ≤ (1 + Δ) ‖w*‖²_{ℓ₂} holds with probability at least 1 − 2e^{−nΔ²/8}, and
    sup_{u∈C∩S^{d−1}} |⟨g, u_⊥⟩| ≤ 2ω(C ∩ S^{d−1}) + η
holds with probability at least 1 − e^{−η²/2}. Combining the last two inequalities we conclude that
    (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u_⊥⟩ ≤ √(1 + Δ) · ( 2ω(C ∩ S^{d−1}) + η )/√n · ‖w*‖_{ℓ₂}   (6.9)
holds with probability at least 1 − 2e^{−nΔ²/8} − e^{−η²/2}. Plugging (6.8) with t = δ/6 and (6.9) with Δ = 1 and η = (δ/6)√n into (6.7),
    | (1/n) Σ_{i=1}^n |⟨x_i, w*⟩| ⟨x_i, u⟩ | ≤ (δ/2) ‖w*‖_{ℓ₂}   (6.10)
holds with probability at least 1 − 3e^{−γnδ²} − 2e^{−n/8} as long as n ≥ (288/δ²) ω²(C ∩ S^{d−1}). Thus plugging (6.6) and (6.10) into (6.1) we conclude that for δ = 7/400
    ‖w₁ − w*‖_{ℓ₂} ≤ 2 · sup_{u∈C∩B^d} uᵀ( (2/n) Σ_{i=1}^n ReLU(⟨x_i, w*⟩) x_i − w* ) ≤ 2δ ‖w*‖_{ℓ₂} ≤ (7/200) ‖w*‖_{ℓ₂}
holds with probability at least 1 − 8e^{−γn} as long as n ≥ c ω²(C ∩ S^{d−1}) for a fixed numerical constant c. To introduce our general convergence analysis we begin by defining
c. To introduce our general convergence analysis we begin by defining
7
.
200
To prove Theorem 3.1 we use [21][Page 25, inequality (7.34)] which shows that if we apply the
projected gradient descent update w? +1 = PK (w? ? ?L(w? )), the error h? = w? ? w? obeys
E() = {w ? Rd ? R(w) ? R(w? ), ?w ? w? ?`2 ? ?w? ?`2 } with =
?h? +1 ?`2 = ?w? +1 ? w? ?`2 ? 2 ? sup u? (h? ? ?L(w? )) .
(6.11)
u?C?Bn
To complete the convergence analysis it is then sufficient to prove
1
1
?h? ?`2 = ?w? ? w? ?`2 .
(6.12)
4
4
We will instead prove that the following stronger result holds for all u ? C ? B n and w ? E()
sup u? (h? ? ?L(w? )) ?
u?C?Bn
1
?w ? w? ?`2 .
(6.13)
4
The equation (6.13) above implies (6.12) which when combined with (6.11) proves the convergence
result of the Theorem (specifically equation (3.5)). The rest of this section is dedicated to proving (6.13). To this aim note that ReLU(⟨x_i, w⟩) = (⟨x_i, w⟩ + |⟨x_i, w⟩|)/2. Thus (see the extended version of this paper [20] for a more detailed derivation of the identity below)
    ⟨∇L(w), u⟩ = (1/n) Σ_{i=1}^n ⟨x_i, w − w*⟩⟨x_i, u⟩ + (1/n) Σ_{i=1}^n sgn(⟨x_i, w*⟩) ⟨x_i, w − w*⟩⟨x_i, u⟩
      + (1/n) Σ_{i=1}^n ( sgn(⟨x_i, w⟩) − sgn(⟨x_i, w*⟩) ) ⟨x_i, w − w*⟩⟨x_i, u⟩
      + (1/(2n)) Σ_{i=1}^n ( 1 − sgn(⟨x_i, w*⟩) ) ( sgn(⟨x_i, w*⟩) − sgn(⟨x_i, w⟩) ) |⟨x_i, w*⟩| ⟨x_i, u⟩.
Now defining h = w − w* we conclude that ⟨u, w − w* − ∇L(w)⟩ = ⟨u, h − ∇L(w)⟩ is equal to
    ⟨u, h − ∇L(w)⟩ = uᵀ( I − (1/n) XᵀX ) h − (1/n) Σ_{i=1}^n sgn(⟨x_i, w*⟩) ⟨x_i, h⟩⟨x_i, u⟩
      + (1/n) Σ_{i=1}^n ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) sgn(⟨x_i, w*⟩) ⟨x_i, h⟩⟨x_i, u⟩
      + (1/(2n)) Σ_{i=1}^n sgn(⟨x_i, w⟩) ( 1 − sgn(⟨x_i, w*⟩) ) ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) |⟨x_i, w*⟩| ⟨x_i, u⟩.
Now define h_⊥ = h − (⟨h, w*⟩/‖w*‖²_{ℓ₂}) w*. Using this we can rewrite the previous expression in the form (see the proof in the extended version of this paper [20] for a more detailed derivation)
    ⟨u, w − w* − ∇L(w)⟩ = uᵀ( I − (1/n) XᵀX ) h − (1/n) Σ_{i=1}^n sgn(⟨x_i, w*⟩) ⟨x_i, h⟩⟨x_i, u⟩
      + (1/n) Σ_{i=1}^n ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) sgn(⟨x_i, w*⟩) ⟨x_i, h_⊥⟩⟨x_i, u⟩
      + (1/n) Σ_{i=1}^n [ (sgn(⟨x_i, w⟩)/2) ( 1 − sgn(⟨x_i, w*⟩) ) + ⟨h, w*⟩/‖w*‖²_{ℓ₂} ] ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) |⟨x_i, w*⟩| ⟨x_i, u⟩.   (6.14)
We now proceed by stating bounds on each of the four terms in (6.14). The detailed derivation of these bounds appears in the extended version of this paper [20].
Lemma 6.5 Assume the setup of Theorem 3.1. Then as long as n ≥ c n₀, we have
    uᵀ( I − (1/n) XᵀX ) h ≤ δ ‖h‖_{ℓ₂},   (6.15)
    | (1/n) Σ_{i=1}^n sgn(⟨x_i, w*⟩) ⟨x_i, h⟩⟨x_i, u⟩ | ≤ δ ‖h‖_{ℓ₂},   (6.16)
    (1/n) Σ_{i=1}^n ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) sgn(⟨x_i, w*⟩) ⟨x_i, h_⊥⟩⟨x_i, u⟩ ≤ 2√(1+δ) ( δ + √(21ε/20) ) ‖h‖_{ℓ₂},   (6.17)
    (1/n) Σ_{i=1}^n [ (sgn(⟨x_i, w⟩)/2) ( 1 − sgn(⟨x_i, w*⟩) ) + ⟨h, w*⟩/‖w*‖²_{ℓ₂} ] ( 1 − sgn(⟨x_i, w⟩) sgn(⟨x_i, w*⟩) ) |⟨x_i, w*⟩| ⟨x_i, u⟩ ≤ ( 4√(1+δ)/(1−ε)² ) ( δ + √(21ε/20) ) ‖h‖_{ℓ₂},   (6.18)
holds for all u ∈ C ∩ S^{d−1} and w ∈ E(ε) with probability at least 1 − 9e^{−γn}.
Combining (6.15), (6.16), (6.17), and (6.18) we conclude that
    ⟨u, w − w* − ∇L(w)⟩ ≤ [ 2δ + √(1+δ) ( 1 + 2/(1−ε)² ) ( δ + √(21ε/20) ) ] ‖w − w*‖_{ℓ₂}
holds for all u ∈ C ∩ S^{d−1} and w ∈ E(ε) with probability at least 1 − 16e^{−γδ²n} − (n + 10)e^{−γn}. Using this inequality with δ = 10⁻⁴ and ε = 7/200 we conclude that ⟨u, w − w* − ∇L(w)⟩ ≤ (1/4) ‖w − w*‖_{ℓ₂} holds for all u ∈ C ∩ S^{d−1} and w ∈ E(ε) with high probability.
Acknowledgements
This work was done in part while the author was visiting the Simons Institute for the Theory of
Computing. M.S. would like to thank Adam Klivans and Matus Telgarsky for discussions related to
[6] and the Isotron algorithm.
References
[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: Phase transitions
in convex programs with random data. Information and Inference, 2014.
[2] A. Brutzkus and A. Globerson. Globally optimal gradient descent for a convnet with gaussian
inputs. International Conference on Machine Learning (ICML), 2017.
[3] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear
inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[4] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep
neural networks with multitask learning. In Proceedings of the 25th international conference
on Machine learning, pages 160–167. ACM, 2008.
[5] R. Ganti, N. Rao, R. M. Willett, and R. Nowak. Learning single index models in high dimensions.
arXiv preprint arXiv:1506.08910, 2015.
[6] S. Goel, V. Kanade, A. Klivans, and J. Thaler. Reliably learning the ReLU in polynomial time.
arXiv preprint arXiv:1611.10258, 2016.
[7] Y. Gordon. On Milman's inequality and random subspaces which escape through a mesh in ℝⁿ.
Springer, 1988.
[8] B. D. Haeffele and R. Vidal. Global optimality in tensor factorization, deep learning, and
beyond. arXiv preprint arXiv:1506.07540, 2015.
[9] J. L. Horowitz and W. Hardle. Direct semiparametric estimation of single-index models with
discrete covariates. Journal of the American Statistical Association, 91(436):1632–1640, 1996.
[10] H. Ichimura. Semiparametric least squares (SLS) and weighted SLS estimation of single-index
models. Journal of Econometrics, 58(1-2):71–120, 1993.
[11] S. M. Kakade, V. Kanade, O. Shamir, and A. Kalai. Efficient learning of generalized linear and
single index models with isotonic regression. In Advances in Neural Information Processing
Systems, pages 927–935, 2011.
[12] A. T. Kalai and R. Sastry. The isotron algorithm: High-dimensional isotonic regression. In
COLT, 2009.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in neural information processing systems, pages 1097–1105,
2012.
[14] Y. Li and Y. Yuan. Convergence analysis of two-layer neural networks with ReLU activation.
arXiv preprint arXiv:1705.09886, 2017.
[15] A. Mohamed, G. E. Dahl, and G. Hinton. Acoustic modeling using deep belief networks. IEEE
Transactions on Audio, Speech, and Language Processing, 20(1):14–22, 2012.
[16] Quynh Nguyen and Matthias Hein. The loss surface of deep and wide neural networks. arXiv
preprint arXiv:1704.08045, 2017.
[17] S. Oymak, B. Recht, and M. Soltanolkotabi. Sharp time?data tradeoffs for linear inverse
problems. arXiv preprint arXiv:1507.04793, 2015.
[18] S. Oymak and M. Soltanolkotabi. Fast and reliable parameter estimation from nonlinear
observations. arXiv preprint arXiv:1610.07108, 2016.
[19] T. Poston, C-N. Lee, Y. Choie, and Y. Kwon. Local minima and back propagation. In Neural
Networks, 1991., IJCNN-91-Seattle International Joint Conference on, volume 2, pages 173–176.
IEEE, 1991.
[20] M. Soltanolkotabi. Learning ReLUs via gradient descent. arXiv preprint arXiv:1705.04591,
2017.
[21] M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample
complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.
[22] M. Soltanolkotabi, A. Javanmard, and J. D. Lee. Theoretical insights into the optimization
landscape of over-parameterized shallow neural networks. 07 2017.
[23] Y. Tian. An analytical formula of population gradient for two-layered relu network and its
applications in convergence and critical point analysis. International Conference on Machine
Learning (ICML), 2017.
[24] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint
arXiv:1011.3027, 2010.
[25] K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon. Recovery guarantees for one-hiddenlayer neural networks. arXiv preprint arXiv:1706.03175, 2017.
6,408 | 6,797 | Stabilizing Training of Generative Adversarial
Networks through Regularization
Kevin Roth
Department of Computer Science
ETH Zürich
Aurelien Lucchi
Department of Computer Science
ETH Zürich
[email protected]
[email protected]
Sebastian Nowozin
Microsoft Research
Cambridge, UK
[email protected]
Thomas Hofmann
Department of Computer Science
ETH Zürich
[email protected]
Abstract
Deep generative models based on Generative Adversarial Networks (GANs) have
demonstrated impressive sample quality but in order to work they require a careful
choice of architecture, parameter initialization, and selection of hyper-parameters.
This fragility is in part due to a dimensional mismatch or non-overlapping support
between the model distribution and the data distribution, causing their density ratio
and the associated f -divergence to be undefined. We overcome this fundamental
limitation and propose a new regularization approach with low computational cost
that yields a stable GAN training procedure. We demonstrate the effectiveness
of this regularizer across several architectures trained on common benchmark
image generation tasks. Our regularization turns GAN models into reliable building
blocks for deep learning. 1
1
Introduction
A recent trend in the world of generative models is the use of deep neural networks as data generating
mechanisms. Two notable approaches in this area are variational auto-encoders (VAEs) [14, 28] as
well as generative adversarial networks (GAN) [8]. GANs are especially appealing as they move
away from the common likelihood maximization viewpoint and instead use an adversarial game
approach for training generative models. Let us denote by P(x) and Q_θ(x) the data and model distribution, respectively. The basic idea behind GANs is to pair up a θ-parametrized generator network that produces Q_θ with a discriminator which aims to distinguish between P and Q_θ, whereas the generator aims for making Q_θ indistinguishable from P. Effectively the discriminator represents a class of objective functions F that measures dissimilarity of pairs of probability distributions. The final objective is then formed via a supremum over F, leading to the saddle point problem
    min_θ ℓ(Q_θ; F) := sup_{F∈F} F(P, Q_θ).   (1)
The standard way of representing a specific F is through a family of statistics or discriminants ψ, typically realized by a neural network [8, 26]. In GANs, we use these discriminators in a logistic classification loss as follows
    F(P, Q; ψ) = E_P[g(ψ(x))] + E_Q[g(−ψ(x))],   (2)
¹Code available at https://github.com/rothk/Stabilizing_GANs
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
where g(z) = ln(σ(z)) is the log-logistic function (for reference, σ(ψ(x)) = D(x) in [8]).
As shown in [8], for the Bayes-optimal discriminator ψ* the above generator objective reduces to the Jensen-Shannon (JS) divergence between P and Q. The work of [25] later generalized this to
a more general class of f -divergences, which gives more flexibility in cases where the generative
model may not be expressive enough or where data may be scarce.
We consider three different challenges for learning the model distribution:
(A) empirical estimation: the model family may contain the true distribution or a good approximation
thereof, but one has to identify it based on a finite training sample drawn from P. This is commonly
addressed by the use of regularization techniques to avoid overfitting, e.g. in the context of estimating
f -divergences with M -estimators [24]. In our work, we suggest a novel (Tikhonov) regularizer,
derived and motivated from a training-with-noise scenario, where P and Q are convolved with white
Gaussian noise [30, 3], namely
    F_γ(P, Q; ψ) := F(P ∗ Λ, Q ∗ Λ; ψ),   Λ = N(0, γ I).   (3)
(B) density misspecification: the model distribution and true distribution both have a density function
with respect to the same base measure but there exists no parameter for which these densities are
sufficiently similar. Here, the principle of parameter estimation via divergence minimization is
provably sound in that it achieves a well-defined limit [1, 21]. It therefore provides a solid foundation
for statistical inference that is robust with regard to model misspecifications.
(C) dimensional misspecification: the model distribution and the true distribution do not have a density function with respect to the same base measure or, even worse, supp(P) ∩ supp(Q) may be negligible. This may occur whenever the model and/or data are confined to low-dimensional manifolds [3, 23]. As pointed out in [3], a geometric mismatch can be detrimental for f-GAN models as the resulting f-divergence is not finite (the sup in Eq. (1) is +∞). As a remedy, it has
been suggested to use an alternative family of distance functions known as integral probability
metrics [22, 31]. These include the Wasserstein distance used in Wasserstein GANs (WGAN) [3] as
well as RKHS-induced maximum mean discrepancies [9, 16, 6], which all remain well-defined. We
will provide evidence (analytically and experimentally) that the noise-induced regularization method
proposed in this paper effectively makes f -GAN models robust against dimensional misspecifications.
While this introduces some dependency on the (Euclidean) metric of the ambient data space, it does
so on a well-controlled length scale (the amplitude of noise or strength of the regularization γ) and
by retaining the benefits of f -divergences. This is a rather gentle modification compared to the more
radical departure taken in Wasserstein GANs, which rely solely on the ambient space metric (through
the notion of optimal mass transport).
In what follows, we will take Eq. (3) as the starting point and derive an approximation via a regularizer
that is simple to implement as an integral operator penalizing the squared gradient norm. As opposed
to a naïve norm penalization, each f-divergence has its own characteristic weighting function over
the input space, which depends on the discriminator output. We demonstrate the effectiveness
of our approach on a simple Gaussian mixture as well as on several benchmark image datasets
commonly used for generative models. In both cases, our proposed regularization yields stable GAN
training and produces samples of higher visual quality. We also perform pairwise tests of regularized
vs. unregularized GANs using a novel cross-testing protocol.
In summary, we make the following contributions:
• We systematically derive a novel, efficiently computable regularization method for f-GAN.
• We show how this addresses the dimensional misspecification challenge.
• We empirically demonstrate stable GAN training across a broad set of models.
2
Background
The fundamental way to learn a generative model in machine learning is to (i) define a parametric
family of probability densities {Q_θ}, θ ∈ Θ ⊆ ℝ^d, and (ii) find parameters θ* ∈ Θ such that Q_{θ*} is
closest (in some sense) to the true distribution P. There are various ways to measure how close model
and real distribution are, or equivalently, various ways to define a distance or divergence function
between P and Q. In the following we review different notions of divergences used in the literature.
f-divergence. GANs [8] are known to minimize the Jensen-Shannon divergence between P and Q. This was generalized in [25] to f-divergences induced by convex functions f. An interesting property of f-divergences is that they permit a variational characterization [24, 27] via
    D_f(P‖Q) := E_Q[ f(dP/dQ) ] = sup_u ∫_X ( u · dP/dQ − f^c(u) ) dQ,   (4)
where dP/dQ is the Radon–Nikodym derivative and f^c(t) := sup_{u ∈ dom_f} { ut − f(u) } is the Fenchel dual of f. By defining an arbitrary class of statistics ψ : X → ℝ we arrive at the bound
    D_f(P‖Q) ≥ sup_ψ ∫_X ( ψ · dP/dQ − f^c(ψ) ) dQ = sup_ψ { E_P[ψ] − E_Q[f^c ∘ ψ] }.   (5)
Eq. (5) thus gives us a variational lower bound on the f-divergence as an expectation over P and Q, which is easier to evaluate (e.g. via sampling from P and Q, respectively) than the density-based formulation. We can see that by identifying the statistic with g ∘ ψ and with the choice of f such that f^c = −ln(1 − exp), we get −f^c ∘ (g ∘ ψ) = ln(1 − σ(ψ)) = g(−ψ), thus recovering Eq. (2).
Integral Probability Metrics (IPM). An alternative family of divergences are integral probability
metrics [22, 31], which find a witness function to distinguish between P and Q. This class of
methods yields an objective similar to Eq. (2) that requires optimizing a distance function between
two distributions over a function class F. Particular choices for F yield the kernel maximum mean
discrepancy approach of [9, 16] or Wasserstein GANs [3]. The latter distance is defined as
    W(P, Q) = sup_{‖f‖_L ≤ 1} { E_P[f] − E_Q[f] },   (6)
where the supremum is taken over functions f which have a bounded Lipschitz constant.
As shown in [3], the Wasserstein metric implies a different notion of convergence compared to the
JS divergence used in the original GAN. Essentially, the Wasserstein metric is said to be weak as it
requires the use of a weaker topology, thus making it easier for a sequence of distribution to converge.
The use of a weaker topology is achieved by restricting the function class to the set of bounded
Lipschitz functions. This yields a hard constraint on the function class that is empirically hard to
satisfy. In [3], this constraint is implemented via weight clipping, which is acknowledged to be a
"terrible way" to enforce the Lipschitz constraint. As will be shown later, our regularization penalty
can be seen as a soft constraint on the Lipschitz constant of the function class which is easy to
implement in practice. Recently, [10] has also proposed a similar regularization; while their proposal
was motivated for Wasserstein GANs and does not extend to f -divergences it is interesting to observe
that both their and our regularization work on the gradient.
Training with Noise. As suggested in [3, 30], one can break the dimensional misspecification
discussed in Section 1 by adding continuous noise to the inputs of the discriminator, therefore
smoothing the probability distribution. However, this requires to add high-dimensional noise, which
introduces significant variance in the parameter estimation process. Counteracting this requires a
lot of samples and therefore ultimately leads to a costly or impractical solution. Instead we propose
an approach that relies on analytic convolution of the densities P and Q with Gaussian noise. As
we demonstrate below, this yields a simple weighted penalty function on the norm of the gradients.
Conceptually we think of this noise not as being part of the generative process (as in [3]), but rather
as a way to define a smoother family of discriminants for the variational bound of f -divergences.
Regularization for Mode Dropping. Other regularization techniques address the problem of mode
dropping and are complementary to our approach. This includes the work of [7] which incorporates a
supervised training signal as a regularizer on top of the discriminator target. To implement supervision
the authors use an additional auto-encoder as well as a two-step training procedure which might
be computationally expensive. A similar approach was proposed by [20] that stabilizes GANs by
unrolling the optimization of the discriminator. The main drawback of this approach is that the
computational cost scales with the number of unrolling steps. In general, it is not clear to what extent
these methods not only stabilize GAN training, but also address the conceptual challenges listed in
Section 1.
3
Noise-Induced Regularization
From now onwards, we consider the general f-GAN [25] objective defined as
    F(P, Q; ψ) := E_P[ψ] − E_Q[f^c ∘ ψ].   (7)
3.1
Noise Convolution
From a practitioner's point of view, training with noise can be realized by adding zero-mean random variables ξ to samples x ∼ P, Q during training. Here we focus on normal white noise ξ ∼ Λ = N(0, γ I) (the same analysis goes through with a Laplacian noise distribution for instance). From a theoretical perspective, adding noise is tantamount to convolving the corresponding distribution as
    E_P E_ξ[ψ(x + ξ)] = ∫∫ ψ(x) p(x − ξ) λ(ξ) dξ dx = ∫ ψ(x) (p ∗ λ)(x) dx = E_{P∗Λ}[ψ],   (8)
where p and λ are probability densities of P and Λ, respectively, with regard to the Lebesgue measure. The noise distribution Λ as well as the resulting P ∗ Λ are guaranteed to have full support in the ambient space, i.e. λ(x) > 0 and (p ∗ λ)(x) > 0 (∀x). Technically, applying this to both P and Q makes the resulting generalized f-divergence well-defined, even when the generative model is dimensionally misspecified. Note that approximating E_ξ through sampling was previously investigated in [30, 3].
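Identity (8) is easy to sanity-check numerically in one dimension: averaging ψ over noisy samples x + ξ matches integrating ψ against the convolved density p ∗ λ. The sketch below is our own illustration; the choice of ψ and of a Gaussian P is arbitrary, and we exploit that the convolution of two Gaussians is again Gaussian with the variances added.

import numpy as np

rng = np.random.default_rng(1)
gamma = 0.25
psi = lambda x: np.tanh(3.0 * x)     # an arbitrary smooth discriminator statistic

# Left-hand side of (8): E_P E_xi[psi(x + xi)] with P = N(1, 0.5^2), xi ~ N(0, gamma).
x = 1.0 + 0.5 * rng.standard_normal(200_000)
xi = np.sqrt(gamma) * rng.standard_normal(200_000)
lhs = psi(x + xi).mean()

# Right-hand side: E_{P * Lambda}[psi], with P * Lambda = N(1, 0.5^2 + gamma).
x_conv = 1.0 + np.sqrt(0.25 + gamma) * rng.standard_normal(200_000)
rhs = psi(x_conv).mean()
print(lhs, rhs)                      # the two estimates agree up to Monte Carlo error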
3.2
Convolved Discriminants
With symmetric noise, λ(ξ) = λ(−ξ), we can write Eq. (8) equivalently as
    E_{P∗Λ}[ψ] = E_P E_ξ[ψ(x + ξ)] = ∫ p(x) ∫ ψ(x − ξ) λ(−ξ) dξ dx = E_P[ψ ∗ λ].   (9)
For the Q-expectation in Eq. (7) one gets, by the same argument, E_{Q∗Λ}[f^c ∘ ψ] = E_Q[(f^c ∘ ψ) ∗ λ]. Formally, this generalizes the variational bound for f-divergences in the following manner:
    F(P ∗ Λ, Q ∗ Λ; ψ) = F(P, Q; ψ ∗ λ, (f^c ∘ ψ) ∗ λ),   where   F(P, Q; φ, φ̃) := E_P[φ] − E_Q[φ̃].   (10)
Assuming that F is closed under λ-convolutions, the regularization will result in a relative weakening of the discriminator as we take the sup over a smaller, more regular family. Clearly, the low-pass effect of λ-convolutions can be well understood in the Fourier domain. In this equivalent formulation,
we leave P and Q unchanged, yet we change the view the discriminator can take on the ambient data
space: metaphorically speaking, the generator is paired up with a short-sighted adversary.
3.3
Analytic Approximations
In general, it may be difficult to analytically compute ψ ∗ λ or, equivalently, E_ξ[ψ(x + ξ)]. However, for small γ we can use a Taylor approximation of ψ around ξ = 0 (cf. [5]):
    ψ(x + ξ) = ψ(x) + [∇ψ(x)]ᵀ ξ + (1/2) ξᵀ [∇²ψ(x)] ξ + O(ξ³),   (11)
where ∇² denotes the Hessian, whose trace Tr(∇²ψ) = Δψ is known as the Laplace operator. The properties of white noise result in the approximation
    E_ξ[ψ(x + ξ)] = ψ(x) + (γ/2) Δψ(x) + O(γ²)   (12)
and thereby lead directly to an approximation of F_γ (see Eq. (3)) via F_γ = F plus a correction, i.e.
    F_γ(P, Q; ψ) = F(P, Q; ψ) + (γ/2) { E_P[Δψ] − E_Q[Δ(f^c ∘ ψ)] } + O(γ²).   (13)
We can interpret Eq. (13) as follows: the Laplacian measures how much the scalar fields ψ and f^c ∘ ψ differ at each point from their local average. It is thereby an infinitesimal proxy for the (exact) convolution.
The Laplace operator is a sum of d terms, where d is the dimensionality of the ambient data space. As
such it does not suffer from the quadratic blow-up involved in computing the Hessian. If we realize
the discriminator via a deep network, however, then we need to be able to compute the Laplacian
of composed functions. For concreteness, let us assume that ψ = h ∘ G, G = (g₁, . . . , g_k), and look at a single input x, i.e. g_i : ℝ → ℝ; then
    (h ∘ G)′ = Σ_i g_i′ · (∂_i h ∘ G),   (h ∘ G)″ = Σ_i g_i″ · (∂_i h ∘ G) + Σ_{i,j} g_i′ g_j′ · (∂_i ∂_j h ∘ G).   (14)
So at the intermediate layer, we would need to effectively operate with a full Hessian, which is
computationally demanding, as has already been observed in [5].
3.4
Efficient Gradient-Based Regularization
We would like to derive a (more) tractable strategy for regularizing ψ, which (i) avoids the detrimental variance that comes from sampling ξ, (ii) does not rely on explicitly convolving the distributions P and Q, and (iii) avoids the computation of Laplacians as in Eq. (13). Clearly, this requires to make further simplifications. We suggest to exploit properties of the maximizer ψ* of F that can be characterized by [24]
    ((f^c)′ ∘ ψ*) dQ = dP   ⟹   E_P[h] = E_Q[ ((f^c)′ ∘ ψ*) · h ]   (∀h, integrable).   (15)
The relevance of this becomes clear, if we apply the chain rule to Δ(f^c ∘ ψ), assuming that f^c is twice differentiable
    Δ(f^c ∘ ψ) = ((f^c)″ ∘ ψ) · ‖∇ψ‖² + ((f^c)′ ∘ ψ) · Δψ,   (16)
as now we get a convenient cancellation of the Laplacians at ψ = ψ* + O(γ)
    F_γ(P, Q; ψ*) = F(P, Q; ψ*) − (γ/2) E_Q[ ((f^c)″ ∘ ψ*) · ‖∇ψ*‖² ] + O(γ²).   (17)
We can (heuristically) turn this into a regularizer by taking the leading terms,
    F_γ(P, Q; ψ) ≈ F(P, Q; ψ) − (γ/2) Ω_f(Q; ψ),   Ω_f(Q; ψ) := E_Q[ ((f^c)″ ∘ ψ) · ‖∇ψ‖² ].   (18)
Note that we do not assume that the Laplacian terms cancel far away from the optimum, i.e. we do not assume Eq. (15) to hold for ψ far away from ψ*. Instead, the underlying assumption we make is that optimizing the gradient-norm regularized objective F_γ(P, Q; ψ) makes ψ converge to ψ* + O(γ), for which we know that the Laplacian terms cancel [5, 2].
The convexity of f^c implies that the weighting function of the squared gradient norm is non-negative, i.e. (f^c)″ ∘ ψ ≥ 0, which in turn implies that the regularizer −(γ/2) Ω_f(Q; ψ) is upper bounded (by zero). Maximization of F_γ(P, Q; ψ) with respect to ψ is therefore well-defined. Further considerations regarding the well-definedness of the regularizer can be found in sec. 7.2 in the Appendix.
4
Regularizing GANs
We have shown that training with noise is equivalent to regularizing the discriminator. Inspired by
the above analysis, we propose the following class of f -GAN regularizers:
Regularized f -GAN
F (P, Q; ) = EP [ ] EQ [f c
]
?f (Q; )
h
i2
2
?f (Q; ) := EQ (f c 00
) kr k
(19)
The regularizer corresponding to the commonly used parametrization of the Jensen-Shannon GAN
can be derived analogously as shown in the Appendix. We obtain,
Regularized Jensen-Shannon GAN
F (P, Q; ') = EP [ln(')] + EQ [ln(1 ')]
?JS (P, Q; ')
2 ?
?
?
?
?JS (P, Q; ') := EP (1 '(x))2 ||r (x)||2 + EQ '(x)2 ||r (x)||2
(20)
1
where =
(') denotes the logit of the discriminator '. We prefer to compute the gradient of
as it is easier to implement and more robust than computing gradients after applying the sigmoid.
Algorithm 1 Regularized JS-GAN. Default values: $\gamma_0 = 2.0$, $\beta = 0.01$ (with annealing); $\gamma = 0.1$ (without annealing); $n_\varphi = 1$.

Require: Initial noise variance $\gamma_0$, annealing decay rate $\beta$, number of discriminator update steps $n_\varphi$ per generator iteration, minibatch size m, number of training iterations T.
Require: Initial discriminator parameters $\omega_0$, initial generator parameters $\theta_0$.

for t = 1, ..., T do
    $\gamma \leftarrow \gamma_0 \cdot \beta^{t/T}$    # annealing
    for 1, ..., $n_\varphi$ do
        Sample a minibatch of real data $\{x^{(1)}, ..., x^{(m)}\} \sim P$.
        Sample a minibatch of latent variables from the prior $\{z^{(1)}, ..., z^{(m)}\} \sim p(z)$.
        $F(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \big[ \ln \varphi_\omega(x^{(i)}) + \ln\big(1 - \varphi_\omega(G_\theta(z^{(i)}))\big) \big]$
        $\Omega(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \big[ (1 - \varphi_\omega(x^{(i)}))^2\, \|\nabla_x \psi_\omega(x^{(i)})\|^2 + \varphi_\omega(G_\theta(z^{(i)}))^2\, \|\nabla_{\tilde{x}} \psi_\omega(\tilde{x})|_{\tilde{x} = G_\theta(z^{(i)})}\|^2 \big]$
        $\omega \leftarrow \omega + \nabla_\omega \big[ F(\omega, \theta) - \frac{\gamma}{2} \Omega(\omega, \theta) \big]$    # gradient ascent
    end for
    Sample a minibatch of latent variables from the prior $\{z^{(1)}, ..., z^{(m)}\} \sim p(z)$.
    $F(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \ln\big(1 - \varphi_\omega(G_\theta(z^{(i)}))\big)$    or    $F_{alt}(\omega, \theta) = -\frac{1}{m} \sum_{i=1}^m \ln \varphi_\omega(G_\theta(z^{(i)}))$
    $\theta \leftarrow \theta - \nabla_\theta F(\omega, \theta)$    # gradient descent
end for

The gradient-based updates can be performed with any gradient-based learning rule. We used Adam in our experiments.
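For concreteness, the discriminator-side regularizer of Algorithm 1 can be sketched in PyTorch as follows. This is our illustrative translation, not the authors' released code; `disc_logits` is assumed to be a network that outputs the logit $\psi$ directly, in line with the remark after Eq. (20):

```python
import torch

def js_regularizer(disc_logits, x_real, x_fake):
    """Omega_JS of Eq. (20): E_P[(1-phi)^2 ||grad psi||^2] + E_Q[phi^2 ||grad psi||^2]."""
    reg = 0.0
    for x, weight in ((x_real, lambda p: (1.0 - p) ** 2),
                      (x_fake, lambda p: p ** 2)):
        x = x.detach().requires_grad_(True)
        psi = disc_logits(x).reshape(-1)                 # logits psi(x), shape (batch,)
        grad = torch.autograd.grad(psi.sum(), x, create_graph=True)[0]
        grad_norm2 = grad.flatten(1).pow(2).sum(dim=1)   # ||grad_x psi||^2 per sample
        reg = reg + (weight(torch.sigmoid(psi)) * grad_norm2).mean()
    return reg

# Discriminator step (ascent on F - (gamma/2) * Omega), sketched as a loss to minimize:
# loss_d = -(torch.log(phi_real).mean() + torch.log(1 - phi_fake).mean()) \
#          + 0.5 * gamma * js_regularizer(D, x_real, x_fake)
```

Differentiating the logit rather than the post-sigmoid output mirrors the robustness remark above; `create_graph=True` keeps the gradient penalty differentiable with respect to the discriminator parameters.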
4.1 Training Algorithm
Regularizing the discriminator provides an efficient way to convolve the distributions and is thereby
sufficient to address the dimensional misspecification challenges outlined in the introduction. This
leaves open the possibility to use the regularizer also in the objective of the generator. On the one
hand, optimizing the generator through the regularized objective may provide useful gradient signal
and therefore accelerate training. On the other hand, it destabilizes training close to convergence (if not dealt with properly), since the generator is incentivized to put probability mass where the discriminator has large gradients. In the case of JS-GANs, we recommend pairing the regularized objective of the discriminator with the "alternative" or "non-saturating" objective for the generator,
proposed in [8], which is known to provide strong gradients out of the box (see Algorithm 1).
4.2 Annealing

The regularizer variance $\gamma$ lends itself nicely to annealing. Our experimental results indicate that a reasonable annealing scheme consists in regularizing with a large initial $\gamma_0$ early in training and then (exponentially) decaying $\gamma$ to a small non-zero value. We leave to future work the question of how to determine an optimal annealing schedule.
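A minimal sketch of the exponential schedule from Algorithm 1 (the decay form and defaults are taken from its header):

```python
def annealed_gamma(t, T, gamma0=2.0, beta=0.01):
    # gamma decays exponentially from gamma0 (at t = 0) to gamma0 * beta (at t = T)
    return gamma0 * beta ** (t / T)
```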
5 Experiments

5.1 2D submanifold mixture of Gaussians in 3D space
To demonstrate the stabilizing effect of the regularizer, we train a simple GAN architecture [20] on a
2D submanifold mixture of seven Gaussians arranged in a circle and embedded in 3D space (further
details and an illustration of the mixture distribution are provided in the Appendix). We emphasize
that this mixture is degenerate with respect to the base measure defined in ambient space as it does
not have fully dimensional support, thus precisely representing one of the failure scenarios commonly
described in the literature [3]. The results are shown in Fig. 1 for both standard unregularized GAN training as well as our regularized variant.

[Figure 1; rows from top to bottom labeled UNREG., $\gamma = 0.01$, $\gamma = 1.0$] Figure 1: 2D submanifold mixture. The first row shows one of several unstable unregularized GANs trained to learn the dimensionally misspecified mixture distribution. The remaining rows show regularized GANs (with regularized objective for the discriminator and unregularized objective for the generator) for different levels of regularization $\gamma$. Even for small but non-zero noise variance, the regularized GAN can essentially be trained indefinitely without collapse. The color of the samples is proportional to the density estimated from a Gaussian KDE fit. The target distribution is shown in Fig. 5. GANs were trained with one discriminator update per generator update step (indicated).
While the unregularized GAN collapses in literally every run after around 50k iterations, due to the
fact that the discriminator concentrates on ever smaller differences between generated and true data
(the stakes are getting higher as training progresses), the regularized variant can be trained essentially
indefinitely (well beyond 200k iterations) without collapse for various degrees of noise variance, with
and without annealing. The stabilizing effect of the regularizer is even more pronounced when the
GANs are trained with five discriminator updates per generator update step, as shown in Fig. 6.
5.2 Stability across various architectures
To demonstrate the stability of the regularized training procedure and to showcase the excellent
quality of the samples generated from it, we trained various network architectures on the CelebA [17],
CIFAR-10 [15] and LSUN bedrooms [32] datasets. In addition to the deep convolutional GAN
(DCGAN) of [26], we trained several common architectures that are known to be hard to train
[4, 26, 19], therefore allowing us to establish a comparison to the concurrently proposed gradient-penalty regularizer for Wasserstein GANs [10]. Among these architectures are a DCGAN without
any normalization in either the discriminator or the generator, a DCGAN with tanh activations and a
deep residual network (ResNet) GAN [11]. We used the open-source implementation of [10] for our
experiments on CelebA and LSUN, with one notable exception: we use batch normalization also for
the discriminator (as our regularizer does not depend on the optimal transport plan or more precisely
the gradient penalty being imposed along it).
All networks were trained using the Adam optimizer [13] with learning rate $2 \times 10^{-4}$ and hyperparameters recommended by [26]. We trained all datasets using batches of size 64, for a total of 200k generator iterations in the case of LSUN and 100k iterations on CelebA. The results of these
experiments are shown in Figs. 3 & 2. Further implementation details can be found in the Appendix.
5.3 Training time
We empirically found regularization to increase the overall training time by a marginal factor
of roughly 1.4 (due to the additional backpropagation through the computational graph of the
discriminator gradients). More importantly, however, (regularized) f -GANs are known to converge
(or at least generate good looking samples) faster than their WGAN relatives [10].
[Figure 2 panels: ResNet, DCGAN, No Normalization, Tanh] Figure 2: Stability across various architectures: ResNet, DCGAN, DCGAN without normalization, and DCGAN with tanh activations (details in the Appendix). All samples were generated from regularized GANs with exponentially annealed $\gamma_0 = 2.0$ (and alternative generator loss) as described in Algorithm 1. Samples were produced after 200k generator iterations on the LSUN dataset (see also Fig. 8 for a full-resolution image of the ResNet GAN). Samples for the unregularized architectures can be found in the Appendix.
[Figure 3 columns: UNREG., $\gamma_0 = 0.5$, $\gamma_0 = 1.0$, $\gamma_0 = 2.0$] Figure 3: Annealed Regularization. CelebA samples generated by (un)regularized ResNet GANs. The initial level of regularization $\gamma_0$ is shown below each batch of images. $\gamma_0$ was exponentially annealed as described in Algorithm 1. The regularized GANs can be trained essentially indefinitely without collapse; the superior quality is again evident. Samples were produced after 100k generator iterations.
5.4 Regularization vs. explicitly adding noise
We compare our regularizer against the common practitioner's approach of explicitly adding noise to
images during training. In order to compare both approaches (analytic regularizer vs. explicit noise),
we fix a common batch size (64 in our case) and subsequently train with different noise-to-signal
ratios (NSR): we take (batch-size/NSR) samples (both from the dataset and generated ones) to each
of which a number of NSR noise vectors is added and feed them to the discriminator (so that overall
both models are trained on the same batch size). We experimented with NSR 1, 2, 4, 8 and show the
best performing ratio (further ratios in the Appendix). Explicitly adding noise in high-dimensional
ambient spaces introduces additional sampling variance which is not present in the regularized variant.
The results, shown in Fig. 4, confirm that the regularizer stabilizes across a broad range of noise
levels and manages to produce images of considerably higher quality than the unregularized variants.
5.5 Cross-testing protocol
We propose the following pairwise cross-testing protocol to assess the relative quality of two GAN
models: unregularized GAN (Model 1) vs. regularized GAN (Model 2). We first report the confusion
matrix (classification of 10k samples from the test set against 10k generated samples) for each model
separately. We then classify 10k samples generated by Model 1 with the discriminator of Model 2
and vice versa. For both models, we report the fraction of false positives (FP) (Type I error) and false
negatives (FN) (Type II error). The discriminator with the lower FP (and/or lower FN) rate defines
the better model, in the sense that it is able to more accurately classify out-of-data samples, which
indicates better generalization properties. We obtained the following results on CIFAR-10:
[Figure 4 panels: UNREGULARIZED; EXPLICIT NOISE; REGULARIZED; the noise/regularization levels (0.001 to 1.0) are shown above each batch] Figure 4: CIFAR-10 samples generated by (un)regularized DCGANs (with alternative generator loss), as well as by training a DCGAN with explicitly added noise (noise-to-signal ratio 4). The level of regularization or noise is shown above each batch of images. The regularizer stabilizes across a broad range of noise levels and manages to produce images of higher quality than the unregularized variants. Samples were produced after 50 training epochs.
Regularized GAN ($\gamma$ = 0.1):

                     True Positive   True Negative
Predicted Positive   0.9688          0.0002
Predicted Negative   0.0312          0.9998
Cross-testing: FP: 0.0

Unregularized GAN:

                     True Positive   True Negative
Predicted Positive   1.0             0.0013
Predicted Negative   0.0             0.9987
Cross-testing: FP: 1.0
For both models, the discriminator is able to recognize its own generator's samples (low FP in the confusion matrix). The regularized GAN also manages to perfectly classify the unregularized GAN's samples as fake (cross-testing FP 0.0) whereas the unregularized GAN classifies the samples of the
regularized GAN as real (cross-testing FP 1.0). In other words, the regularized model is able to fool
the unregularized one, whereas the regularized variant cannot be fooled.
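A sketch of how the cross-testing false-positive rate can be computed from raw discriminator outputs (our illustration; the function and variable names are hypothetical, and "positive" means classified as real at the 0.5 threshold):

```python
import numpy as np

def cross_test_fp(disc_scores_on_other_samples):
    """False-positive rate of a discriminator on samples from the *other* model's generator.

    All inputs are fake, so every score above the 0.5 threshold is a false positive.
    """
    preds_real = np.asarray(disc_scores_on_other_samples) > 0.5
    return preds_real.mean()   # FP rate in [0, 1]
```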
6 Conclusion
We introduced a regularization scheme to train deep generative models based on generative adversarial
networks (GANs). While dimensional misspecifications or non-overlapping support between the
data and model distributions can cause severe failure modes for GANs, we showed that this can be
addressed by adding a penalty on the weighted gradient-norm of the discriminator. Our main result is
a simple yet effective modification of the standard training algorithm for GANs, turning them into
reliable building blocks for deep learning that can essentially be trained indefinitely without collapse.
Our experiments demonstrate that our regularizer improves stability, prevents GANs from overfitting
and therefore leads to better generalization properties (cf. the cross-testing protocol). Further research on
the optimization of GANs as well as their convergence and generalization can readily be built upon
our theoretical results.
Acknowledgements
We would like to thank Devon Hjelm for pointing out that the regularizer works well with ResNets.
KR is thankful to Yannic Kilcher, Lars Mescheder and the dalab team for insightful discussions. Big
thanks also to Ishaan Gulrajani and Taehoon Kim for their open-source GAN implementations. This
work was supported by Microsoft Research through its PhD Scholarship Programme.
References

[1] Shun-ichi Amari and Hiroshi Nagaoka. Methods of information geometry. American Mathematical Soc., 2007.
[2] Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Comput., pages 643-674, 1996.
[3] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[4] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. Proceedings of Machine Learning Research. PMLR, 2017.
[5] Chris M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7:108-116, 1995.
[6] Diane Bouchacourt, Pawan K Mudigonda, and Sebastian Nowozin. Disco nets: Dissimilarity coefficients networks. In Advances in Neural Information Processing Systems, pages 352-360, 2016.
[7] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016.
[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[9] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012.
[10] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, 2017.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of Machine Learning Research, pages 448-456. PMLR, 2015.
[13] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR), 2014.
[14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. The International Conference on Learning Representations (ICLR), 2013.
[15] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[16] Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, pages 1718-1727, 2015.
[17] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730-3738, 2015.
[18] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
[19] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs. In Advances in Neural Information Processing Systems, 2017.
[20] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.
[21] Tom Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
[22] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29:429-443, 1997.
[23] Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In Advances in Neural Information Processing Systems, pages 1786-1794, 2010.
[24] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, 2010.
[25] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271-279, 2016.
[26] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[27] Mark D Reid and Robert C Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731-817, 2011.
[28] Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[29] David W Scott. Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons, 2015.
[30] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[31] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics, phi-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009.
[32] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
6,409 | 6,798 | Expectation Propagation with Stochastic Kinetic
Model in Complex Interaction Systems
Le Fang, Fan Yang, Wen Dong, Tong Guan, and Chunming Qiao
Department of Computer Science and Engineering
University at Buffalo
{lefang, fyang24, wendong, tongguan, qiao}@buffalo.edu
Abstract
Technological breakthroughs allow us to collect data with increasing spatiotemporal resolution from complex interaction systems. The combination of high-resolution observations, expressive dynamic models, and efficient machine learning
algorithms can lead to crucial insights into complex interaction dynamics and the
functions of these systems. In this paper, we formulate the dynamics of a complex
interacting network as a stochastic process driven by a sequence of events, and
develop expectation propagation algorithms to make inferences from noisy observations. To avoid getting stuck at a local optimum, we formulate the problem of
minimizing Bethe free energy as a constrained primal problem and take advantage
of the concavity of the dual problem in the feasible domain of dual variables, as guaranteed by the duality theorem. Our expectation propagation algorithms demonstrate
better performance in inferring the interaction dynamics in complex transportation
networks than competing models such as particle filter, extended Kalman filter, and
deep neural networks.
1 Introduction
We live in a complex world, where many collective systems are difficult to interpret. In this paper,
we are interested in complex interaction systems, also called complex interaction networks, which
are large systems of simple units linked by a network of interactions. Many research topics exemplify complex interaction systems in specific domains, such as neural activities in our brain, the
movement of people in an urban system, epidemic and opinion dynamics in social networks, and so
on. Modeling and inference for dynamics on these systems has attracted considerable interest since
it potentially provides valuable new insights, for example about functional areas of the brain and
relevant diagnoses [7], about traffic congestion and more efficient use of roads [19], and about where,
when and to what extent people are infected in an epidemic crisis [23]. Agent-based modeling and
simulation [22] is a classical way to address complex systems with interacting components to explore
general collective rules and principles, especially in the field of systems biology. However, the actual
underlying dynamics of a specific real system are not within its scope. People are not satisfied with only a macroscopic general description but aim to track down an evolving system.

Unprecedented opportunities for researchers in these fields have recently emerged due to the prosperity of social media and sensor tools. For instance, functional magnetic resonance imaging
(fMRI) and the electroencephalogram (EEG) can directly measure brain activity, something never
possible before. Similarly, signal sensing technologies can now easily track people's movement and
interactions [12, 24]. Researchers no longer need to worry about acquiring abundant observation
data, and instead are pursuing more powerful theoretical tools to grasp the opportunities afforded by
that data. We, in the machine learning community, are interested in the inference problem, that is,
recovering the hidden dynamics of a system given certain observations. However, challenges still
exist in these efforts, especially when facing systems with a large number of components.
Statistical inference on complex interaction systems has a close relationship with the statistical physics
of disordered ensembles, for instance, the established equivalence between loopy belief propagation
and the Bethe free energy formulation [25]. In the past, the main interaction between statistical physics
and statistical inference has focused on building stationary and equilibrium probability distributions
over the state of a system. However, temporal dynamics is omitted when only equilibrium state
is pursued. This leads not only to the loss of a significant amount of interesting information, but
possibly also to qualitatively wrong conclusions. In terms of learning dynamics, one approach is
to solve stochastic differential equations (SDE) [20]. In each SDE, at least one term belongs to a
stochastic process, of which the most common is the Wiener process. The drift and diffusion terms in
these SDEs are what we need to recover from multiple realizations (sample paths) of the stochastic
process. Typically, an assumption of constant diffusion and linear drift makes the problem tractable,
but realistic dynamics generally cannot be modeled by rigid SDEs with simple assumptions.
Inference on complex interaction systems naturally corresponds to inference on large graphical
models, which is a classical topic in machine learning. Exact filtering and smoothing algorithms
are impractical due to the exploding computational cost to make inferences about complex systems.
The hidden Markov model [17] faces an exponentially exploding size of the state transition kernel.
The Kalman filter [15] and its variants, such as the extended Kalman filter [14], solve the linear or nonlinear estimation problem assuming that the latent and observed variables are jointly Gaussian distributions. Its scalability versus the number of components is $O(M^3)$ due to the time cost in
matrix operations.
Approximate algorithms to make inferences with complex interaction systems can be divided roughly
into sampling-based and optimization-based methods. Among sampling based methods, particle filter
and smoother [4, 18] use particles to represent the posterior distribution of a stochastic process given
noisy observations. However, particle based methods show weak scalability in a complex system: a
large number of particles is needed, even in moderate-size complex systems where the number of components grows into the thousands. A variety of Markov chain Monte Carlo (MCMC) methods
have been proposed [6, 5], but these generally have issues with rapid convergence in high-dimensional systems. Among optimization-based methods, expectation propagation (EP) [16, 13] refers to a
family of approximate inference algorithms with local marginal projection. These methods adopt an
iterative approach to approximate each factor of the target distribution into a tractable family. EP
methods have been shown to be relatively efficient, faster than sampling in many low-dimensional examples [16, 13]. The equivalence between EP energy minimization and Bethe free energy minimization has been justified [16]. Researchers have proposed a "double loop" algorithm to minimize the Bethe free energy [13] in order to handle the non-convex term in the objective. They formulate a saddle point
problem where strictly speaking the inner loop should be converged before moving to the outer
loop. However, the stability of saddle points is an issue in general. There are also ad hoc energy
optimization methods for specific network structures, for instance [21] for binary networks, but the
generality of these methods is unknown.
In this paper, we present new formulation of EP and apply it to solve the inference problem in
general large complex interaction systems. This paper makes the following contributions. First, we
formulated expectation propagation as an optimization problem to maximize a concave dual function,
where any local maximum is also the global maximum and provides a solution to the Bethe free energy minimization problem. To this end, we transformed the concave term in the Bethe free energy into its Legendre dual and added a regularization constraint to the primal problem. Second, we designed
gradient ascent and fixed point algorithms to make inferences about complex interaction systems
with the stochastic kinetic model. In all the algorithms we make mean-field inferences about the
individual components from observations about them according to the average interactions of all other
components. Third, we conducted experiments on our transportation network data to demonstrate
the performance of our proposed algorithms over the state of the art algorithms in inferring complex
network dynamics from noisy observations.
The remainder of this paper is organized as follows. In Section 2, we briefly review some models
to specify complex system dynamics and the issues in minimizing Bethe free energy. In Section 3,
we formulate the problem of minimizing Bethe free energy as maximizing a concave dual function
satisfying dual feasible constraint, and develop gradient-based and fixed-point methods to make
tractable inferences with the stochastic kinetic model. In Section 4, we detail empirical results from
applying the proposed algorithms to make inferences about transportation network dynamics. Section
5 concludes.
2 Background
In this section, we provide brief background on describing complex system dynamics and on typical issues in minimizing the Bethe free energy.
2.1 Dynamic Bayesian Network and State-Space Model
A dynamic Bayesian network (DBN) captures the dynamics of a complex interaction system by
specifying how the values of state variables at the current time are probabilistically dependent on
the values at the previous time. Let $x_t = (x_t^{(1)}, \dots, x_t^{(M)})$ be the values and $y_t = (y_t^{(1)}, y_t^{(2)}, \dots, y_t^{(M)})$ be the observations made at these M state variables at time t. The probability measure of a sample path with observations can be written as

$$p(x_{1,\dots,T}, y_{1,\dots,T}) = \prod_t p(x_t \mid x_{t-1})\, p(y_t \mid x_t) = \prod_t p(x_t \mid x_{t-1}) \prod_m p(y_t^{(m)} \mid x_t^{(m)}),$$

where $p(x_t \mid x_{t-1})$ is the state transition model and $p(y_t \mid x_t)$ is the observation model. We can factorize the state transition into miniature kernels involving only variable $x_t^{(m)}$ and its parents $\mathrm{Pa}(x_t^{(m)})$. The DBN inference problem is to infer $p(x_t^{(m)} \mid y_{1,\dots,T})$ for given observations $y_{1,\dots,T}$.
State-space models (SSM) use state variables to describe a system by a set of first-order differential or difference equations. For example, the state evolves as $x_t = F_t x_{t-1} + w_t$ and we make observations with $y_t = H_t x_t + v_t$. Typical filtering and smoothing algorithms estimate the series of $x_t$ from the time series of $y_t$.
Both DBN and SSM face difficulties in directly capturing the complex interactions, since these interactions seldom obey simple rigid equations and are too complex to be expressed by a joint transition kernel, even when allowing such a kernel to vary over time. The SKM model that follows uses a sequence of events to capture such nonlinear and time-variant dynamics.
2.2 Stochastic Kinetic Model
The stochastic kinetic model (SKM) [9, 23] has been successfully applied in many fields, especially
chemistry and system biology [1, 22, 8]. It describes the dynamics with chemical reactions occurring
stochastically at an adaptive rate. By analogy with a chemical reaction system, we consider a complex
interaction system involving M system components (species) and V types of events (reactions).
Generally, the system forms a Markov jump process [9] with a finite set of discrete events. Each
event v can be characterized by a "chemical equation":

$$r_v^{(1)} X^{(1)} + \dots + r_v^{(M)} X^{(M)} \;\rightarrow\; p_v^{(1)} X^{(1)} + \dots + p_v^{(M)} X^{(M)} \qquad (1)$$

where $X^{(m)}$ denotes the m-th component, and $r_v^{(m)}$ and $p_v^{(m)}$ count the (relative) quantities of reactants and products. Let $x_t^{(m)}$ be the population count (or continuous number, as a concentration) of the m-th species at time t; an event changes the populations $(x_t^{(1)}, x_t^{(2)}, \dots, x_t^{(M)})$ by $\Delta_v = (p_v^{(1)} - r_v^{(1)}, p_v^{(2)} - r_v^{(2)}, \dots, p_v^{(M)} - r_v^{(M)})$. Events occur mutually independently of each other, and each event rate $h_v(x_t, c_v)$ is a function of the current state:

$$h_v(x_t, c_v) = c_v \prod_{m=1}^{M} g_v^{(m)}(x_t^{(m)}) = c_v \prod_{m=1}^{M} \binom{x_t^{(m)}}{r_v^{(m)}} \qquad (2)$$

where $c_v$ denotes the rate constant and $\prod_{m=1}^{M} \binom{x_t^{(m)}}{r_v^{(m)}}$ counts the number of different ways for the components to meet and trigger an event. When we consider time steps $1, 2, \dots, t, \dots, T$ with a sufficiently
small time interval $\tau$, the probability of two or more events happening in the interval is negligible [11]. Consider a sample path $p(x_{1,\dots,T}, v_{2,\dots,T}, y_{1,\dots,T})$ of the system with the sequence of states
$x_1, \dots, x_T$, occurred events $v_2, \dots, v_T$, and observations $y_1, \dots, y_T$. We can express the event-based state transition kernel $P(x_t, v_t \mid x_{t-1})$ in terms of the event rates $h_v(x_t, c_v)$:

$$P(x_t, v_t \mid x_{t-1}) = I\big(x_t = x_{t-1} + \Delta_{v_t} \text{ and } x_t \in (x_{\min}, x_{\max})\big) \cdot P(v_t \mid x_{t-1})$$
$$= I\big(x_t = x_{t-1} + \Delta_{v_t} \text{ and } x_t \in (x_{\min}, x_{\max})\big) \cdot \begin{cases} \tau\, h_v(x_{t-1}, c_v) & \text{if } v_t = v \\ 1 - \sum_v \tau\, h_v(x_{t-1}, c_v) & \text{if } v_t = \varnothing \end{cases} \qquad (3)$$
where $\varnothing$ represents a null event in which none of those V events happens and the state does not change; $I(\cdot)$ is the indicator function; and $x_{\min}, x_{\max}$ are respectively lower-bound and upper-bound vectors, which prohibit "ghost" transitions between out-of-scope $x_{t-1}$ and $x_t$. For instance, we generally need to bound $x_t$ to be non-negative in realistic complex systems. This natural constraint on $x_t$ leads to a linearly truncated state space in which realistic events lie.
Instead of state transitions possibly from any state to any other as in a DBN, or state updates via a linear (or nonlinear) transformation as in an SSM, the state in the SKM evolves according to a finite number of events between time steps. The transition kernel is dependent on the underlying system state and so is adaptive for
capturing the underlying system dynamics. We can now consider the inference problem of complex
interaction systems in the context of general DBN, with a specific event-based transition kernel from
SKM.
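To make the event-driven kernel concrete, the following minimal sketch (our own illustration, not code from the paper) simulates one discrete-time step of Eq. (3) for a toy two-species system; the two random-walk events, rate constants, and time step are arbitrary assumptions chosen so that the event probabilities stay valid:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 0.01
# two events on state x = (x1, x2): one unit moves 1 -> 2, or 2 -> 1
deltas = np.array([[-1, 1], [1, -1]])
rates = [lambda x: 0.5 * x[0],   # h_v(x) = c_v * g_v(x), proportional to source count
         lambda x: 0.3 * x[1]]

def step(x):
    h = np.array([r(x) for r in rates])
    p = np.append(tau * h, 1.0 - tau * h.sum())   # event probabilities + null event
    v = rng.choice(len(p), p=p)
    return x if v == len(h) else x + deltas[v]

x = np.array([100, 50])
for _ in range(1000):
    x = step(x)
print(x)   # populations after 1000 small steps; the total count is conserved
```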
2.3 Bethe Free Energy
In a general DBN, the expectation propagation algorithm aims to minimize the Bethe free energy $F_{Bethe}$ [16, 25, 13], subject to moment-matching constraints. We have a non-convex primal objective, and its trivial dual function with dual variables in the full space is not concave. We adopt the general notation that the potential function is $\psi(x_{t-1,t}) = P(x_t, y_t \mid x_{t-1})$, and our optimization problem becomes the following:

$$\text{minimize } F_{Bethe} = \sum_t \int dx_{t-1,t}\, \hat{p}_t(x_{t-1,t}) \log \frac{\hat{p}_t(x_{t-1,t})}{\psi(x_{t-1,t})} - \sum_t \int dx_t\, q_t(x_t) \log q_t(x_t)$$
$$\text{subject to: } \langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})} = \langle f(x_t)\rangle_{q_t(x_t)} = \langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})}$$
$$\text{maximize } F_{Dual} = -\sum_t \log \int dx_{t-1,t}\, \exp(\alpha_{t-1}^\top f(x_{t-1}))\, \psi(x_{t-1,t})\, \exp(\beta_t^\top f(x_t)) + \sum_t \log \int dx_t\, \exp\big((\alpha_t + \beta_t)^\top f(x_t)\big)$$
In the above, $\hat{p}_t(x_{t-1,t}) \approx p(x_{t-1,t} \mid y_{1,\dots,T})$ are approximate two-slice probabilities and $q_t(x_t) \approx p(x_t \mid y_{1,\dots,T})$ are approximate one-slice probabilities. The vector-valued function $f(x_t)$ maps a random variable $x_t$ to its statistics. Integrals such as $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})} = \int dx_t\, f(x_t) \int dx_{t-1}\, \hat{p}_t(x_{t-1,t})$ are the mean parameters to be matched in the optimization. $F_{Bethe}$ is the relative entropy (or K-L divergence) between the approximate distribution $\prod_t \hat{p}_t(x_{t-1,t}) / q_t(x_t)$ and the true distribution $p(x_{1,\dots,T} \mid y_{1,\dots,T}) = \prod_t \psi(x_{t-1,t})$ to be minimized. With the method of Lagrange multipliers, one can find that $\hat{p}_t(x_{t-1,t})$ and $q_t(x_t)$ are distributions in the exponential family, parameterized either by the mean parameters $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})}$ and $\langle f(x_t)\rangle_{q_t(x_t)}$ or by the natural parameters $\alpha_{t-1}$ and $\beta_t$, and the trivial dual target $F_{Dual}$ is the negative log partition of the dynamic Bayesian network.
The problem with minimizing $F_{Bethe}$ or maximizing $F_{Dual}$ is that both have multiple local optima and there is no guarantee how closely a locally optimal solution approximates the true posterior probability of the latent state. In $F_{Bethe}$, $\int dx_{t-1,t}\, \hat{p}_t(x_{t-1,t}) \log \frac{\hat{p}_t(x_{t-1,t})}{\psi(x_{t-1,t})}$ is a convex term, $-\sum_t \int dx_t\, q_t(x_t) \log q_t(x_t)$ is concave, and the sum is not guaranteed to be convex. Similarly in $F_{Dual}$, the minus log partition function of $\hat{p}_t$ (first term) is concave, the log partition function of $q_t$ is convex, and the sum is not guaranteed to be concave.
Another difficulty with expectation propagation is that the approximate probability distribution often
needs to satisfy some inequality constraints. For example, when approximating a target probability
distribution with the product of normal distributions in Gaussian expectation propagation, we require
that all factor normal distributions have positive variance. So far, the common heuristic is to set the
variances to very large numbers once they fall below zero.
3 Methodology
As noted in Subsection 2.3, the difficulty in minimizing Bethe free energy is that both the FPrimal
and FDual have many local optima in the full space. Our formulation starts with transforming the
concave term into its Legendre dual and treating the dual variables as additional variables. Thereafter we drop the dependence on $q_t(x_t)$ by utilizing the moment-matching constraints, formulate EP as
a constrained minimization problem and derive its dual optimization problem (which is concave
under a dual feasible constraint). Our formulation also provides theoretical insights to avoid negative
variance in Gaussian expectation propagation.
We start by minimizing the Bethe free energy over the two-slice probabilities p?t and the one-slice
probabilities qt :
minimize over $\hat{p}_t(x_{t-1,t})$, $q_t(x_t)$:

$$F_{Bethe} = \sum_t \int dx_{t-1,t}\, \hat{p}_t(x_{t-1,t}) \log \frac{\hat{p}_t(x_{t-1,t})}{\psi(x_{t-1,t})} - \sum_t \int dx_t\, q_t(x_t) \log q_t(x_t)$$

subject to: $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})} = \langle f(x_t)\rangle_{q_t(x_t)} = \langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})}$, $\int dx_t\, q_t(x_t) = 1 = \int dx_{t-1,t}\, \hat{p}_t(x_{t-1,t})$. $\qquad (4)$
We introduce the Legendre dual $-\int dx_t\, q_t \log q_t = \min_{\lambda_t} \big\{ -\lambda_t^\top \langle f(x_t)\rangle_{q_t} + \log \int dx_t\, \exp(\lambda_t^\top f(x_t)) \big\}$ and replace $\langle f(x_t)\rangle_{q_t(x_t)}$ in the target with $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})}$ by utilizing the constraint $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})} = \langle f(x_t)\rangle_{q_t(x_t)}$. Instead of searching $\lambda_t$ over the over-complete full space, we add a regularization constraint to bound it:
minimize over $\hat{p}_t(x_{t-1,t})$, $\lambda_t$:

$$F_{Primal} = \sum_t \int dx_{t-1,t}\, \hat{p}_t \log \frac{\hat{p}_t(x_{t-1,t})}{\psi(x_{t-1,t})} - \sum_t \lambda_t^\top \langle f(x_t)\rangle_{\hat{p}_t} + \sum_t \log \int dx_t\, \exp(\lambda_t^\top f(x_t))$$

subject to: $\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})} = \langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})}$, $\int dx_{t-1,t}\, \hat{p}_t(x_{t-1,t}) = 1$, $\lambda_t^\top \lambda_t \le \epsilon_t$. $\qquad (5)$
In the primal problem, $\lambda_t$ is the natural parameter of a probability in the exponential family: $q(x; \lambda_t) = \exp(\lambda_t^\top f(x_t)) / \int dx_t\, \exp(\lambda_t^\top f(x_t))$. The primal problem (5) is equivalent to the Bethe energy minimization problem.
We solve the primal problem with the Lagrange duality theorem [3]. First, we define the Lagrangian function L by introducing the Lagrange multipliers $\alpha_t$, $\nu_t$ and $\gamma_t$ to incorporate the constraints. Second, we set the derivative over the primal variables to zero. Third, we plug the optimum point back into the Lagrangian. The Lagrange duality theorem implies that $F_{Dual}(\alpha_t, \gamma_t, \nu_t) = \inf_{\hat{p}_t(x_{t-1,t}), \lambda_t} L(\hat{p}_t(x_{t-1,t}), \lambda_t, \alpha_t, \gamma_t, \nu_t)$. Thus the dual problem is as follows:
maximize over $\alpha_t$, $\gamma_t \ge 0$ for all t:

$$F_{Dual} = -\sum_t \log Z_{t-1,t} + \sum_t \log \int dx_t\, \exp(\lambda_t^\top f(x_t)) + \sum_t \frac{\gamma_t}{2}\big(\lambda_t^\top \lambda_t - \epsilon_t\big) \qquad (6)$$

$$\text{where } -\langle f(x_t)\rangle_{\hat{p}_t} + \langle f(x_t)\rangle_{\lambda_t} + \gamma_t \lambda_t = 0 \qquad (7)$$

$$\hat{p}_t(x_{t-1,t}) = \frac{1}{Z_{t-1,t}} \exp(\alpha_{t-1}^\top f(x_{t-1}))\, \psi(x_{t-1,t})\, \exp\big((\lambda_t^\top - \alpha_t^\top) f(x_t)\big) \qquad (8)$$
In the dual problem, we drop the dual variable $\nu_t$ since it takes the value that normalizes $\hat{p}_t(x_{t-1,t})$ as a valid primal probability. For any dual variables $\alpha_t, \gamma_t$, we map the primal variables $\hat{p}_t(x_{t-1,t})$ and $\lambda_t$ as implicit functions defined by the extreme-point conditions Eqs. (7), (8). We have the following theoretical guarantees, with proofs in the supplementary material. We name $\mathrm{cov}_{\lambda_t}(f(x_t), f(x_t)) + \gamma_t I - \langle f(x_t) \cdot f(x_t)^\top \rangle_{\hat{p}_t(x_{t-1,t})} \succ 0$ the dual feasible constraint.
Proposition 1: The Lagrangian function has a positive definite Hessian matrix under the dual feasible constraint.

Proposition 1 ensures that the dual function is the infimum of the Lagrangian function, the pointwise infimum of a family of affine functions of $\alpha_t, \gamma_t, \nu_t$, and thus is concave. Instead of the full space of dual variables $\alpha_t, \gamma_t$, we only consider the domain constrained by the dual feasible constraint.

Proposition 2: Eqs. (7) and (8) have a unique solution under the dual feasible constraint.
The Lagrange dual problem is a maximization problem over a bounded domain, which can be reduced to an unconstrained problem through a barrier method or by penalizing constraint violations, and can be solved with a gradient ascent algorithm or a fixed point algorithm. The partial derivatives of the dual function over the dual variables are the following:
$$\frac{\partial F_{Dual}}{\partial \alpha_t} = -\langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})} + \langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})}, \qquad \frac{\partial F_{Dual}}{\partial \gamma_t} = \frac{1}{2}\big(\lambda_t^\top \lambda_t - \epsilon_t\big) \qquad (9)$$
where $\hat{p}_t(x_{t-1,t})$ and $\lambda_t$ are implicit functions defined by Eqs. (7), (8). We can get a fixed point iteration by setting the first derivatives to zero.¹ Here $\eta(\cdot)$ converts mean parameters to natural parameters.

$$\frac{\partial F_{Dual}}{\partial \alpha_t} \overset{\text{set}}{=} 0 \;\Longrightarrow\; \text{forward: } \alpha_t^{(new)} = \alpha_t^{(old)} + \eta\big(\langle f(x_t)\rangle_{\hat{p}_t}\big) - \lambda_t^{(old)}, \qquad \text{backward: } \lambda_t^{(new)} = \eta\big(\langle f(x_t)\rangle_{\hat{p}_{t+1}}\big)$$
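The conversion $\eta(\cdot)$ depends on the chosen family. For the Gaussian case with statistics $f(x) = (x, x^2)$, it has a closed form; a minimal sketch (our illustration) for a univariate state:

```python
def eta(m1, m2):
    """Map mean parameters (E[x], E[x^2]) to the natural parameters of
    q(x) proportional to exp(lam1*x + lam2*x^2), a univariate Gaussian."""
    var = m2 - m1 ** 2        # must be positive for a valid Gaussian
    lam1 = m1 / var           # mu / sigma^2
    lam2 = -0.5 / var         # -1 / (2 sigma^2), negative as required
    return lam1, lam2
```

In the categorical case with full indicator statistics, $\eta$ instead maps a marginal probability vector to its log-probabilities (up to an additive constant).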
In terms of Gaussian EP, the primal variables $\hat{p}_t(x_{t-1,t})$ and $\lambda_t$ correspond to multivariate Gaussian distributions, which pose implicit constraints on the primal and dual domains. Let $\Sigma_{\hat{p}_t}, \Sigma_{\lambda_t}$ be the covariance matrices associated with $\hat{p}_t(x_{t-1,t})$ and $\lambda_t$; it is required that $\Sigma_{\hat{p}_t} \succ 0$ and $\Sigma_{\lambda_t} \succ 0$. The domain of dual variables is defined by the following constraints:

$$\gamma_t \ge 0, \quad \Sigma_{\hat{p}_t} \succ 0, \quad \Sigma_{\lambda_t} \succ 0, \quad \mathrm{cov}_{\lambda_t}(f(x_t), f(x_t)) + \gamma_t I - \langle f(x_t) \cdot f(x_t)^\top \rangle_{\hat{p}_t(x_{t-1,t})} \succ 0$$
$$\text{where } -\langle f(x_t)\rangle_{\hat{p}_t} + \langle f(x_t)\rangle_{\lambda_t} + \gamma_t \lambda_t = 0$$
$$\hat{p}_t(x_{t-1,t}) = \frac{1}{Z_{t-1,t}} \exp(\alpha_{t-1}^\top f(x_{t-1}))\, P(x_t, y_t \mid x_{t-1})\, \exp\big((\lambda_t^\top - \alpha_t^\top) f(x_t)\big)$$
In this case, it is nontrivial to find a starting point for $\alpha_t, \gamma_t$. We develop a phase I stage to find a strictly feasible starting point [3]. For convenience, we denote $(\alpha_t, \gamma_t)$ as x and rewrite the above constraints as inequality constraints $g_i(x) \le 0$ and equality constraints $g_j(x) = 0$. Start from a valid $x_0, s$ such that $g_i(x_0) \le s$, $g_j(x_0) = 0$, and then solve the optimization problem

minimize $s$ subject to $g_j(x) = 0$, $g_i(x) \le s$

over the variables $s$ and $x$. A strictly feasible point $x$ is found when we arrive at $s < 0$.
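The phase-I problem can be handed to an off-the-shelf constrained solver; below is a minimal sketch (our illustration: the constraint callables `g_ineq` and `g_eq` are hypothetical stand-ins for the constraints listed above):

```python
import numpy as np
from scipy.optimize import minimize

def phase_one(g_ineq, g_eq, x0):
    """Minimize s subject to g_i(x) <= s and g_j(x) = 0; strictly feasible iff s < 0."""
    s0 = max(g(x0) for g in g_ineq) + 1.0
    z0 = np.append(x0, s0)                       # decision vector z = (x, s)
    cons = [{'type': 'ineq', 'fun': (lambda z, g=g: z[-1] - g(z[:-1]))}
            for g in g_ineq]                     # 'ineq' means fun(z) >= 0
    cons += [{'type': 'eq', 'fun': (lambda z, g=g: g(z[:-1]))} for g in g_eq]
    res = minimize(lambda z: z[-1], z0, constraints=cons, method='SLSQP')
    return res.x[:-1], res.x[-1]                 # candidate x and achieved s
```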
With the duality framework and the SKM, we can solve the dual optimization problem to make inferences about complex system dynamics from imperfect observations. The latent states (the populations in the SKM) can be formulated as either categorical or Gaussian random variables. In the categorical case, the statistics are $f(x_t) = (I(x_t^{(1)} = 1), \dots, I(x_t^{(1)} = x_{\max}^{(1)}), I(x_t^{(2)} = 1), \dots, I(x_t^{(2)} = x_{\max}^{(2)}), \dots)$, where $x_{\max}^{(1)}, \dots, x_{\max}^{(M)}$ are the maximum populations and $I$ is the indicator function. In the Gaussian case, the statistics are $f(x_t) = (x_t^{(1)}, (x_t^{(1)})^2, x_t^{(2)}, (x_t^{(2)})^2, \dots)$ and we force the natural parameters to satisfy the constraint that minus half of the precision is negative.

¹ Empirically, the fixed point iteration converges even without the dual feasible constraint ($\gamma_t = 0$); in general, $\gamma_t$ is bounded by the dual feasible constraint and the derivative over $\gamma_t$ is not zero.
The potential $\psi(x_{t-1,t})$ in the distribution $\hat{p}_{t+1}(x_{t,t+1})$ (Eq. (8)) has the specific form $\sum_{v_t} P(x_t, v_t \mid x_{t-1}) P(y_t \mid x_t)$ as in Eq. (3), which facilitates a mean-field approximation to evaluate $\langle f(x_t)\rangle_{\hat{p}_{t+1}^{(m)}(x_{t,t+1}^{(m)})} \approx \langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})}$ and $\langle f(x_t)\rangle_{\hat{p}_t^{(m)}(x_{t-1,t}^{(m)})} \approx \langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})}$ for each species m, where $\hat{p}_{t+1}^{(m)}(x_{t,t+1})$ and $\hat{p}_t^{(m)}(x_{t-1,t})$ are the marginal two-slice distributions for m, derived explicitly in the supplementary material. As such, we establish linear complexity in the number of species m and tractable inference for general complex system dynamics.
To summarize, Algorithm 1 gives the mean-field forward-backward algorithm and the gradient ascent algorithm for making inferences with a stochastic kinetic model from noisy observations by minimizing the Bethe free energy.
Algorithm 1 Make inferences with a stochastic kinetic model using expectation propagation.

Input: Discrete-time SKM model (Eqs. (1), (2), (3)); observation probabilities $P(y_t \mid x_t)$; and initial values of $\alpha_t, \lambda_t, \gamma_t$ for all populations m and times t.

Expectation propagation fixed point: Alternate between forward and backward iterations until convergence.
- For $t = 1, \dots, T$: $\alpha_t^{(new)} = \alpha_t^{(old)} + \eta\big(\langle f(x_t)\rangle_{\hat{p}_t(x_{t-1,t})}\big) - \lambda_t^{(old)}$.
- For $t = T, \dots, 1$: $\lambda_t^{(new)} = \eta\big(\langle f(x_t)\rangle_{\hat{p}_{t+1}(x_{t,t+1})}\big)$.

Gradient ascent: Execute the following updates in alternating forward and backward sweeps, where the gradients are defined in Eq. (9), under the dual feasible constraints.
- $\alpha_t^{(new)} \leftarrow \alpha_t + \frac{\partial F_{Dual}}{\partial \alpha_t}$, $\gamma_t^{(new)} \leftarrow \gamma_t + \frac{\partial F_{Dual}}{\partial \gamma_t}$.

Output: Optimal $\hat{p}_t(x_{t-1,t})$ and $\langle f(x_t)\rangle_{\hat{p}_t}$ as in Eqs. (7), (8) for all populations m and times t.
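For categorical latent states with full indicator statistics, the projection step is exact and the fixed-point sweep reduces to standard forward-backward message passing over the two-slice potentials. A minimal sketch for a single chain (our illustration; the uniform prior is an assumption):

```python
import numpy as np

def forward_backward(T_kernel, obs_lik):
    """T_kernel: (S, S) transition matrix; obs_lik: (T, S) likelihoods P(y_t | x_t).
    Returns the smoothed marginals p(x_t | y_{1..T}), assuming a uniform prior on x_1."""
    Tn, S = obs_lik.shape
    alpha = np.zeros((Tn, S)); beta = np.ones((Tn, S))
    a = obs_lik[0] / S
    alpha[0] = a / a.sum()
    for t in range(1, Tn):                       # forward sweep
        a = (alpha[t - 1] @ T_kernel) * obs_lik[t]
        alpha[t] = a / a.sum()
    for t in range(Tn - 2, -1, -1):              # backward sweep
        b = T_kernel @ (obs_lik[t + 1] * beta[t + 1])
        beta[t] = b / b.sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```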
4 Experiments on Transportation Dynamics
In this section, we evaluate and benchmark the performance of our proposed algorithms (Algorithm 1)
against mainstream state-of-the-art approaches. We have the flexibility to specify species, states, and
events with different granularities in SKM, at either macroscopic or microscopic level. Consequently,
different levels of inference can be made by feeding in corresponding observations and model
specifications. For example, to track epidemics in a social network we can define each person as a
species and their health state as a hidden state, with infection and recovery as events. Using real-world
datasets about epidemic diffusion on a college campus, we efficiently inferred students' health states
compared with ground truth from surveys [23]. In this section, we demonstrate population level
inference in the context of transportation dynamics.²
Transportation Dynamics A transportation system consists of residents and a network of locations.
The macroscopic description is the number of vehicles indexed by location and time, while the
microscopic description is the location of each vehicle at each time. Our goal is to infer the
macroscopic populations from noisy sensor network observations made at several selected roads.
Such inference problems in complex interaction networks are not trivial, for several reasons: the
system can be very large and contain a large number of components (residents and locations), so that many approaches fail due to resource costs; the interaction between components (i.e. the mobility of residents) is by nature uncertain and time-variant; and multiple variables (the populations at different locations) are correlated with one another.
To model transportation dynamics, we classify people at the same location as one species. Let $l \in L$ index the locations and $x_t^{(l)}$ be the number of vehicles at location l at time t; these are the latent states we want to identify. The events v that change system states can generally be expressed as reactions $l_i \rightarrow l_j$, each representing one vehicle moving from location $l_i$ to location $l_j$. Such an event decreases $x_t^{(l_i)}$ by 1, increases $x_t^{(l_j)}$ by 1, and keeps the other $x_t^{(l)}$ the same. The event rate reads $h_v(x_t, c_v) = c_v \prod_{l=1}^{L} g_v^{(l)}(x_t^{(l)}) = c_v\, x_t^{(l_i)}$, as there are $x_t^{(l_i)}$ different possible vehicles to transit at $l_i$.
² Source code and a general function interface for other domains at both levels are available online.
Experiment Setup: We select a certain proportion, e.g. 20%, of vehicles as probe vehicles to build the observation model, assuming that the probe vehicles are uniformly sampled from the system. Let $x^{ttl}$ be the total number of vehicles in the system, $x^p$ the total number of probe vehicles, $x_t^{(l)}$ the number of vehicles at location l, and $y_t^{(l)}$ the number of probe vehicles observed at l. A rough point estimate of $x_t^{(l)}$ is $x_t^{(l)} = x^{ttl}\, y_t^{(l)} / x^p$. More strictly, the likelihood of observing $y_t^{(l)}$ probe vehicles among $x_t^{(l)}$ vehicles at l is

$$p(y_t^{(l)} \mid x_t^{(l)}) = \binom{x_t^{(l)}}{y_t^{(l)}} \binom{x^{ttl} - x_t^{(l)}}{x^p - y_t^{(l)}} \bigg/ \binom{x^{ttl}}{x^p}.$$

Our hidden state $x_t^{(l)}$ can be represented as either a discrete variable or a univariate Gaussian.
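The likelihood above is exactly a hypergeometric pmf, so it can be evaluated with a standard library routine; a minimal sketch with illustrative numbers:

```python
from scipy.stats import hypergeom

x_ttl, x_p = 2000, 400        # total vehicles, total probe vehicles (20%)
x_l, y_l = 120, 30            # vehicles and observed probe vehicles at location l

# p(y_l | x_l): draw x_p probes from x_ttl vehicles, x_l of which are at location l
lik = hypergeom.pmf(y_l, x_ttl, x_l, x_p)
print(lik)
```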
Dataset Description: We implement and benchmark algorithms on two representative datasets. In
the SynthTown dataset, we synthesize a mini road network (Fig. 1(a)). Virtual residents go to work in
the morning and back home in the evening. We synthesize their itineraries with MATSIM, a common multi-agent transportation simulator [2]. The numbers of residents and locations are respectively 2,000 and 25. In the Berlin dataset, we have a larger real-world road network with 1,539 locations derived from OpenStreetMap and 9,178 people's itineraries synthesized from MATSIM. Both datasets span a whole day, from midnight to midnight.
Evaluation Metrics: To evaluate the accuracy of the model, we compare the series of inferred populations against the series of ground truths. We choose three appropriate metrics: the "coefficient of determination" ($R^2$), the mean percentage error (MPE), and the mean squared error (MSE). In statistics, $R^2$ tells the goodness of fit of a model and is calculated as $1 - \sum_i (y_i - f_i)^2 / \sum_i (y_i - \bar{y})^2$, where $y_i$ are the ground truth values, $\bar{y}$ their mean, and $f_i$ the inferred values. Typically, $R^2$ ranges from 0 to 1: the closer it is to 1, the better the inference. The MPE computes the average of the percentage errors by which the $f_i$ differ from the $y_i$ and is calculated as $\frac{100\%}{n} \sum_i \frac{y_i - f_i}{y_i}$. MPE can be either positive or negative, and the closer it is to 0, the better. The MSE is calculated as $\frac{1}{n} \sum_i (y_i - f_i)^2$ to measure the average deviation between y and f. The lower the MSE, the better the inference. We also consider the runtime as an important metric to study the scalability of the different approaches.
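A minimal sketch of the three metrics as defined here (our illustration; it assumes the ground-truth series is nonzero wherever MPE is evaluated):

```python
import numpy as np

def metrics(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    r2 = 1.0 - ((y_true - y_pred) ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    mpe = 100.0 * ((y_true - y_pred) / y_true).mean()   # signed percentage error
    mse = ((y_true - y_pred) ** 2).mean()
    return r2, mpe, mse
```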
Approaches for Benchmark: We implement three algorithms to instantiate the procedures in
Algorithm 1: the fixed point algorithm with discrete latent states (DFP) or Gaussian latent states (GFP) and the gradient ascent algorithm with discrete latent states (DG). The pseudocode is included in
the supplementary material. We also implement several other mainstream state-of-the-art approaches.
Particle Filter (PF): We implement a sampling importance resampling (SIR) [10] algorithm that
recursively approximates the posterior with a weighted set of particles, updates these particles and
resamples to cope with the degeneracy problem. Performance is dependent on the number of particles, and a certain number is needed to achieve a good result. We selected the number of particles
empirically by increasing the number until no obvious accuracy improvement could be detected,
and ended up with thousands to tens of thousands of particles. Extended Kalman Filter (EKF):
We implement the standard EKF procedure with an alternating prediction step and update step.
Feedforward Neural Network (FNN): The FNN builds only a non-parametric model between input
nodes and output nodes, without ?actually? learning the dynamics of the system. We implement
a five-layer FNN: one input layer accepting the inference time point and observations in certain
previous period (e.g. one hour), three hidden layers and one output layer from which we directly
read the inference populations. The FNN and afterwards RNN are both trained by feeding ground
truth populations about each road into the network structures. We tune meta-parameters and train the
network with 30 days synthesized mobility data from MATSIM until obtaining optimum performance.
Recurrent Neural Network (RNN): The RNN is capable of exploiting previous inferred hidden states
recursively to improve current estimation. We implement a typical RNN, such that in each RNN
cell we take both the current observations and inferred population from a previous cell as input,
traverse one hidden layer, and then output the inferred populations. We train the RNN with 30 days
of synthesized mobility data from MATSIM until obtaining optimum performance.
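As referenced in the PF description above, here is a minimal sketch of one SIR step. The functions `transition` and `likelihood` are placeholders standing in for the stochastic kinetic dynamics and the hypergeometric observation model; none of these names come from the paper.

```python
import numpy as np

def sir_step(particles, weights, y_t, transition, likelihood, rng):
    # Propagate every particle through the (stochastic) system dynamics.
    particles = np.array([transition(x, rng) for x in particles])
    # Re-weight by the likelihood of the new observation y_t.
    weights = weights * np.array([likelihood(y_t, x) for x in particles])
    weights = weights / weights.sum()
    # Resample to cope with weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```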
Inference Performance and Scalability: Figure 1 plots the inferred population at several representative locations of Fig. 1(a). The lines above the shaded areas are the ground truths, and we plot the error (i.e., inferred population minus ground truth) at different scales. For GFP, the inference within the μ ± 3σ confidence interval is shown as the colored "belt". We can see that our proposed algorithms generally deviate less from the ground truth than the other approaches do.
Table 1: Performance and time scalability of all algorithms

            ------------ SynthTown ------------    -------------- Berlin --------------
Algorithm    R2     MPE    MSE   Time              R2     MPE    MSE    Time
DFP          0.85   -3%    181   47 sec            0.66   3%     20     29 min
GFP          0.85   -8%    161   42 sec            0.62   2.5%   27     21 min
DG           0.87   -5%    104   157 sec           0.61   2.8%   26     56 min
PF           0.50   -21%   663   15 sec            0.50   -6%    678    71 min
EKF          0.51   -19%   679   2 sec             0.45   -40%   1046   14 hours
FNN          0.73   11%    526   1 h training      0.31   -14%   540    11 h training
RNN          0.72   -14%   407   8 h training      0.51   -9%    800    28 h training

Figure 1: (a) Road network and (b) inference results with the SynthTown dataset.
Table 1 summarizes the performances in different metrics (mean values). There is both a training
phase and a running phase in making inferences with neural networks, with the training phase taking
longer. The neural network training time shown in the table ranges from several hours to around one
day, and is quadratic in the number of system components per batch per epoch. The neural network
running times in our experiments are comparable with EP running times. Theoretically, neural
network running times are quadratic in the number of system components to make one prediction, and
EP running times are linear in the number of system components to propagate marginal probabilities
from one time step to the next (EP algorithms empirically converge within a few iterations), while PF
scales quadratically and EKF cubically with the number of locations.
Summary: Generally, our proposed algorithms have higher R², narrower MPE, and lower MSE, followed by the neural networks, PF, and EKF. The neural networks sometimes provide comparable performance. Our proposed algorithms, especially DFP and GFP, suffer less running-time growth on bigger datasets. Overall, our algorithms generally outperform PF, EKF, FNN, and RNN in terms of accuracy metrics and scalability to larger datasets.
5 Discussion
In this paper, we have introduced the stochastic kinetic model and developed expectation propagation algorithms to make inferences about the dynamics of complex interacting systems from noisy observations. To avoid getting stuck at a local optimum, we formulate the problem of minimizing the Bethe free energy as a maximization problem over a concave dual function in the feasible domain of dual variables, with equivalence guaranteed by the duality theorem. Our experiments show superior performance over competing models such as the particle filter, the extended Kalman filter, and deep neural networks.
References
[1] Adam Arkin, John Ross, and Harley H McAdams. Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics, 149(4):1633-1648, 1998.
[2] Michael Balmer, Marcel Rieser, Konrad Meister, David Charypar, Nicolas Lefebvre, and Kai Nagel. MATSim-T: Architecture and simulation times. In Multi-agent systems for traffic and transportation engineering, pages 57-78. IGI Global, 2009.
[3] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[4] Pierre Del Moral. Non-linear filtering: interacting particle resolution. Markov Processes and Related Fields, 2(4):555-581, 1996.
[5] Wen Dong, Alex Pentland, and Katherine A Heller. Graph-coupled HMMs for modeling the spread of infection. arXiv preprint arXiv:1210.4864, 2012.
[6] Arnaud Doucet, Nando De Freitas, Kevin Murphy, and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 176-183. Morgan Kaufmann Publishers Inc., 2000.
[7] Karl Friston. Learning and inference in the brain. Neural Networks, 16(9):1325-1352, 2003.
[8] Daniel T Gillespie. Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem., 58:35-55, 2007.
[9] Andrew Golightly and Colin S Gillespie. Simulation of stochastic kinetic models. In Silico Systems Biology, pages 169-187, 2013.
[10] Neil J Gordon, David J Salmond, and Adrian FM Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107-113. IET, 1993.
[11] Winfried K Grassmann. Transient solutions in Markovian queueing systems. Computers & Operations Research, 4(1):47-53, 1977.
[12] Tong Guan, Wen Dong, Dimitrios Koutsonikolas, and Chunming Qiao. Fine-grained location extraction and prediction with little known data. In Wireless Communications and Networking Conference (WCNC), 2017 IEEE, pages 1-6. IEEE, 2017.
[13] Tom Heskes and Onno Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 216-223. Morgan Kaufmann Publishers Inc., 2002.
[14] Simon J Julier and Jeffrey K Uhlmann. Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3):401-422, 2004.
[15] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45, 1960.
[16] Thomas P Minka. The EP energy function and minimization schemes. See www.stat.cmu.edu/~minka/papers/learning.html, 2001.
[17] Lawrence R Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.
[18] Vinayak Rao and Yee Whye Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. arXiv preprint arXiv:1202.3760, 2012.
[19] Claudia Tebaldi and Mike West. Bayesian inference on network traffic using link count data. Journal of the American Statistical Association, 93(442):557-573, 1998.
[20] Michail D Vrettas, Manfred Opper, and Dan Cornford. Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations. Physical Review E, 91(1):012148, 2015.
[21] Max Welling and Yee Whye Teh. Belief optimization for binary networks: A stable alternative to loopy belief propagation. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 554-561. Morgan Kaufmann Publishers Inc., 2001.
[22] Darren J Wilkinson. Stochastic modelling for systems biology. CRC Press, 2011.
[23] Zhen Xu, Wen Dong, and Sargur N Srihari. Using social dynamics to make individual predictions: Variational inference with stochastic kinetic model. In Advances in Neural Information Processing Systems, pages 2775-2783, 2016.
[24] Fan Yang and Wen Dong. Integrating simulation and signal processing with stochastic social kinetic model. In International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, pages 193-203. Springer, Cham, 2017.
[25] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236-239, 2003.
Data-Efficient Reinforcement Learning in
Continuous State-Action Gaussian-POMDPs
Rowan Thomas McAllister
Department of Engineering
Cambridge University
Cambridge, CB2 1PZ
[email protected]
Carl Edward Rasmussen
Department of Engineering
University of Cambridge
Cambridge, CB2 1PZ
[email protected]
Abstract
We present a data-efficient reinforcement learning method for continuous state-action systems under significant observation noise. Data-efficient solutions under small noise exist, such as PILCO, which learns the cartpole swing-up task in 30 s. PILCO evaluates policies by planning state-trajectories using a dynamics model. However, PILCO applies policies to the observed state, therefore planning in observation space. We extend PILCO with filtering to instead plan in belief space, consistent with partially observable Markov decision process (POMDP) planning. This enables data-efficient learning under significant observation noise,
outperforming more naive methods such as post-hoc application of a filter to
policies optimised by the original (unfiltered) PILCO algorithm. We test our
method on the cartpole swing-up task, which involves nonlinear dynamics and
requires nonlinear control.
1 Introduction
The Probabilistic Inference and Learning for COntrol (PILCO) [5] framework is a reinforcement
learning algorithm, which uses Gaussian Processes (GPs) to learn the dynamics in continuous state
spaces. The method has been shown to be highly efficient in the sense that it can learn with only very
few interactions with the real system. However, a serious limitation of PILCO is that it assumes
that the observation noise level is small. There are two main reasons which make this assumption
necessary. Firstly, the dynamics are learnt from the noisy observations, but learning the transition
model in this way doesn't correctly account for the noise in the observations. If the noise is assumed
small, then this will be a good approximation to the real transition function. Secondly, PILCO uses
the noisy observation directly to calculate the action, which is problematic if the observation noise is
substantial. Consider a policy controlling an unstable system, where high gain feed-back is necessary
for good performance. Observation noise is amplified when the noisy input is fed directly to the high
gain controller, which in turn injects noise back into the state, creating cycles of increasing variance
and instability.
In this paper we extend PILCO to address these two shortcomings, enabling PILCO to be used in
situations with substantial observation noise. The first issue is addressed using the so-called Direct
method for training the transition model, see section 3.3. The second problem can be tackled by
filtering the observations. One way to look at this is that PILCO does planning in observation space,
rather than in belief space. In this paper we extend PILCO to allow filtering of the state, by combining
the previous state distribution with the dynamics model and the observation using Bayes rule. Note that this is easily done when the controller is being applied, but to gain the full benefit, we have to
also take the filter into account when optimising the policy.
PILCO trains its policy through minimising the expected predicted loss when simulating the system
and controller actions. Since the dynamics are not known exactly, the simulation in PILCO had to
simulate distributions of possible trajectories of the physical state of the system. This was achieved
using an analytical approximation based on moment-matching and Gaussian state distributions. In
this paper we thus need to augment the simulation over physical states to include the state of the
filter, an information state or belief state. A complication is that the belief state is itself a probability
distribution, necessitating simulating distributions over distributions. This allows our algorithm to
not only apply filtering during execution, but also anticipate the effects of filtering during training,
thereby learning a better policy.
We will first give a brief outline of related work in section 2 and the original PILCO algorithm
in section 3, including the proposed use of the "Direct method" for training dynamics from noisy observations in section 3.3. In section 4 we derive the algorithm for POMDP training, or planning
in belief space. Note an assumption is that we observe noisy versions of the state variables. We
do not handle more general POMDPs where other unobserved states are also learnt nor learn any
other mapping from the state space to observations other than additive Gaussian noise. In the final
sections we show experimental results of our proposed algorithm handling observation noise better
than competing algorithms.
2 Related work
Implementing a filter is straightforward when the system dynamics are known and linear, referred to
as Kalman filtering. For known nonlinear systems, the extended Kalman filter (EKF) is often adequate
(e.g. [13]), as long as the dynamics are approximately linear within the region covered by the belief
distribution. Otherwise, the EKF's first-order Taylor expansion approximation breaks down. Larger nonlinearities warrant the unscented Kalman filter (UKF), a deterministic sampling technique to estimate moments, or particle methods [7, 12]. However, if moments can be computed analytically and exactly, moment-matching methods are preferred. Moment-matching using distributions from the exponential family (e.g. Gaussians) is equivalent to optimising the Kullback-Leibler divergence KL(p||q) between the true distribution p and an approximate distribution q. In such cases, moment-matching is less susceptible to model bias than the EKF due to its conservative predictions [4].
Unfortunately, the literature does not provide a continuous state-action method that is both data
efficient and resistant to noise when the dynamics are unknown and locally nonlinear. Model-free
methods can solve many tasks but require thousands of trials to solve the cartpole swing-up task [8], as opposed to model-based methods like PILCO, which requires about six. Sometimes the dynamics are
partially known, with known functional form yet unknown parameters. Such "grey-box" problems have the aesthetic solution of incorporating the unknown dynamics parameters into the state, reducing the learning task to a POMDP planning task [6, 12, 14]. Finite state-action space tasks can be similarly solved, perhaps using Dirichlet parameters to model the finitely-many state-action-state transitions [10]. However, such solutions are not suitable for continuous-state "black-box" problems with no prior
dynamics knowledge. The original PILCO framework does not assume task-specific prior dynamics
knowledge (only that the prior is vague, encoding only time-independent dynamics and smoothness
on some unknown scale) yet assumes full state observability, failing under moderate sensor noise.
One proposed solution is to filter observations during policy execution [4]. However, without also
predicting system trajectories w.r.t. the filtering process, a policy is merely optimised for unfiltered
control, not filtered control. The mismatch between unfiltered-prediction and filtered-execution
restricts PILCO's ability to take full advantage of filtering. Dallaire et al. [3] optimise a policy using
a more realistic filtered-prediction. However, the method neglects model uncertainty by using the
maximum a posteriori (MAP) model. Unlike the method of Deisenroth and Peters [4] which gives a
full probabilistic treatment of the dynamics predictions, work by Dallaire et al. [3] is therefore highly
susceptible to model error, hampering data-efficiency.
We instead predict system trajectories using closed loop filtered control precisely because we execute
closed loop filtered control. The resulting policies are thus optimised for the specific case in which
they are used. Doing so, our method retains the same data-efficiency properties of PILCO whilst
applicable to tasks with high observation noise. To evaluate our method, we use the benchmark
cartpole swing-up task with noisy sensors. We show that realistic and probabilistic prediction enable
our method to outperform the aforementioned methods.
Algorithm 1 PILCO
1: Define policy's functional form: π : z_t × θ -> u_t.
2: Initialise policy parameters θ randomly.
3: repeat
4:    Execute policy, record data.
5:    Learn dynamics model p(f).
6:    Predict state trajectories from p(X_0) to p(X_T).
7:    Evaluate policy: J(θ) = sum_{t=0}^{T} γ^t E_t, where E_t = E_X[cost(X_t) | θ].
8:    Improve policy: θ <- argmin_θ J(θ).
9: until policy parameters θ converge
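Read as code, Algorithm 1 is a short loop. The following Python skeleton is only a sketch: every helper below is a trivial stand-in of our own so the snippet runs, and the reference implementation is the authors' MATLAB code.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins; real PILCO replaces all of these with GP learning,
# moment-matched trajectory prediction, and an expected-cost evaluation.
T, gamma, n_params = 60, 1.0, 8
def execute_policy(theta):              return [np.random.randn(4) for _ in range(T)]
def learn_dynamics(data):               return None     # GP dynamics model placeholder
def predict_trajectory(model, th, T):   return [np.random.randn(4) for _ in range(T)]
def expected_cost(state):               return float(np.sum(state ** 2))

theta = np.random.randn(n_params)                  # line 2: initialise policy randomly
data = []
for episode in range(6):                           # lines 3-9: repeat until convergence
    data += execute_policy(theta)                  # line 4: execute policy, record data
    model = learn_dynamics(data)                   # line 5: learn dynamics model p(f)
    def J(th):                                     # lines 6-7: predict and evaluate
        states = predict_trajectory(model, th, T)
        return sum(gamma**t * expected_cost(s) for t, s in enumerate(states))
    theta = minimize(J, theta).x                   # line 8: improve policy
```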
3 The PILCO algorithm
PILCO is a model-based policy-search RL algorithm, summarised by Algorithm 1. It applies to
continuous-state, continuous-action, continuous-observation and discrete-time control tasks. After
the policy is executed, the additional data is recorded to train a probabilistic dynamics model. The
probabilistic dynamics model is then used to predict one-step system dynamics (from one timestep
to the next). This allows PILCO to probabilistically predict multi-step system trajectories over an
arbitrary time horizon T, by repeatedly using the predictive dynamics model's output at one timestep,
as the (uncertain) input in the following timestep. For tractability PILCO uses moment-matching to
keep the latent state distribution Gaussian. The result is an analytic distribution of state-trajectories,
approximated as a joint Gaussian distribution over T states. The policy is evaluated as the expected
total cost of the trajectories, where the cost function is assumed to be known. Next, the policy is
improved using local gradient-based optimisation, searching over policy-parameter space. A distinct
advantage of moment-matched prediction for policy search instead of particle methods is smoother
policy gradients and fewer local optima [9]. This process then repeats a small number of iterations
before converging to a locally optimal policy. We now discuss details of each step in Algorithm 1
below, with policy evaluation and improvement discussed in Appendix B.
3.1 Execution phase
Once a policy is initialised, PILCO can execute the system (Algorithm 1, line 4). Let the latent state of the system at time t be x_t ∈ R^D, which is noisily observed as z_t = x_t + ε_t, where ε_t ~ N(0, Σ_ε) i.i.d. The policy π, parameterised by θ, takes observation z_t as input and outputs a control action u_t = π(z_t, θ) ∈ R^F. Applying action u_t to the dynamical system in state x_t results in a new system state x_{t+1}. Repeating until horizon T results in a new single state-trajectory of data.
3.2 Learning dynamics
To learn the unknown dynamics (Algorithm 1, line 5), any probabilistic model flexible enough
to capture the complexity of the dynamics can be used. Bayesian nonparametric models are
particularly suited given their resistance to overfitting and underfitting respectively. Overfitting
otherwise leads to model bias, the result of optimising the policy on the erroneous model. Underfitting limits the complexity of the system this method can learn to control. In a nonparametric model no prior dynamics knowledge is required, not even knowledge of how complex the unknown dynamics might be, since the model's complexity grows with the available data. We define the latent dynamics f : x̃_t -> x_{t+1}, where x̃_t = [x_t^T, u_t^T]^T. PILCO models the dynamics with D independent Gaussian process (GP) priors, one for each dynamics output variable: f^a : x̃_t -> x^a_{t+1}, where a ∈ [1, D] indexes the a-th dynamics output, and f^a ~ GP(φ_a^T x̃, k^a(x̃_i, x̃_j)). Note we implement PILCO with a linear mean function¹ φ_a^T x̃, where the φ_a are additional hyperparameters trained by optimising the marginal likelihood [11, Section 2.7]. The covariance function k is squared exponential, with length scales Λ_a = diag([l_{a,1}², ..., l_{a,D+F}²]) and signal variance s_a²:

k^a(x̃_i, x̃_j) = s_a² exp( -(1/2) (x̃_i - x̃_j)^T Λ_a^{-1} (x̃_i - x̃_j) ).
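A direct transcription of this covariance function (a sketch; variable names are ours):

```python
import numpy as np

def k_se(x_i, x_j, lengthscales, s2):
    # Squared-exponential covariance with ARD length scales:
    # k(x_i, x_j) = s2 * exp(-0.5 * (x_i - x_j)^T Lambda^{-1} (x_i - x_j)),
    # where Lambda = diag(lengthscales**2).
    d = (x_i - x_j) / lengthscales
    return s2 * np.exp(-0.5 * np.dot(d, d))
```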
3.3 Learning dynamics from noisy observations
The original PILCO algorithm ignored sensor noise when training each GP by assuming each observation z_t to be the latent state x_t. However, this approximation breaks down under significant noise. More complex training schemes are required for each GP that correctly treat each training datum x_t as latent, yet noisily observed as z_t. We resort to GP state-space-model methods, specifically the "Direct method" [9, section 3.5]. The Direct method infers the marginal likelihood p(z_{1:N}) approximately using moment-matching in a single forward pass. Doing so, it specifically exploits the time-series structure that generated the observations z_{1:N}. We use the Direct method to set the GP's training data {x_{1:N}, u_{1:N}} and observation noise variance Σ_ε to the inducing-point parameters and noise parameters that optimise the marginal likelihood. In this paper we use the superior Direct method to train GPs, both in our extended version of PILCO presented in section 4 and in our implementation of the original PILCO algorithm, for fair comparison in the experiments.

¹ The original PILCO [5] instead uses a zero mean function, and instead predicts relative changes in state.
3.4 Prediction phase
In contrast to the execution phase, PILCO also predicts analytic distributions of state-trajectories (Algorithm 1, line 6) for policy evaluation. PILCO does this offline, between the online system executions. Predicted control is identical to executed control, except each aforementioned quantity is now a random variable, distinguished with capitals: X_t, Z_t, U_t, X̃_t and X_{t+1}, all approximated as jointly Gaussian. These variables interact both in execution and prediction according to Figure 1. To predict X_{t+1} now that X̃_t is uncertain, PILCO uses the iterated laws of expectation and variance:

p(X_{t+1} | X̃_t) = N( μ^x_{t+1} = E_{X̃}[E_f[f(X̃_t)]],  Σ^x_{t+1} = V_{X̃}[E_f[f(X̃_t)]] + E_{X̃}[V_f[f(X̃_t)]] ).   (1)

After a one-step prediction from X_0 to X_1, PILCO repeats the process from X_1 to X_2, and up to X_T, resulting in a multi-step prediction whose joint we refer to as a distribution over state-trajectories.
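Equation (1) has closed form for SE kernels, but its structure is easy to verify numerically. The sketch below estimates the same two moments by sampling the uncertain input; `gp_mean` and `gp_var` are hypothetical one-dimensional GP predictors, not part of the paper.

```python
import numpy as np

def predict_next_state_moments(mu, Sigma, gp_mean, gp_var, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mu, Sigma, size=n)   # samples of the uncertain input
    m = np.array([gp_mean(x) for x in X])            # E_f[f(x)] at each sample
    v = np.array([gp_var(x) for x in X])             # V_f[f(x)] at each sample
    mu_next = m.mean()                               # E_X[E_f[f(X)]]
    var_next = m.var() + v.mean()                    # V_X[E_f[f(X)]] + E_X[V_f[f(X)]]
    return mu_next, var_next
```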
4 Our method: PILCO extended with Bayesian filtering
Here we describe the novel aspects of our method. Our method uses the same high-level algorithm as PILCO (Algorithm 1). However, we modify (using PILCO's source code, http://mlg.eng.cam.ac.uk/pilco/) two subroutines to extend PILCO from MDPs to a special case of POMDPs (specifically, where the partial observability has the form of additive Gaussian noise on the unobserved state X). First, we filter observations during system execution (Algorithm 1, line 4), detailed in Section 4.1. Second, we predict belief-trajectories instead of state-trajectories (line 6), detailed in Section 4.2. Filtering maintains a belief posterior of the latent system state. The belief is conditioned on, not just the most recent observation, but all previous observations (Figure 2). Such additional conditioning has the benefit of providing a less noisy and better-informed input to the policy: the filtered belief-mean instead of the raw observation z_t. Our implementation continues PILCO's distinction between executing the system (resulting in a single real belief-trajectory) and predicting the system's responses (which in our case yields an analytic distribution of multiple possible future belief-trajectories). During the execution phase, the system reads specific observations z_t. Our method additionally maintains a belief state b ~ N(m, V) by filtering observations. This belief state b can be treated as a random variable with a distribution parameterised by belief-mean m and belief-certainty V, as seen in Figure 3. Note both m and V are functions of the previous observations z_{1:t}. Now, during the (probabilistic) prediction phase, future observations are instead random variables (since they have not been observed yet), distinguished as Z.
[Graphical models omitted; captions follow.]
Figure 1: The original (unfiltered) PILCO, as a probabilistic graphical model. At each timestep, the latent system X_t is observed noisily as Z_t, which is input directly into the policy function π to decide action U_t. Finally, the latent system evolves to X_{t+1}, according to the unknown, nonlinear dynamics function f of the previous state X_t and action U_t.
Figure 2: Our method (PILCO extended with Bayesian filtering). Our prior belief B_{t|t-1} (over latent system X_t) generates observation Z_t. The prior belief B_{t|t-1} then combines with observation Z_t, resulting in the posterior belief B_{t|t} (the update step). Then the mean posterior belief E[B_{t|t}] is input into the policy function π to decide action U_t. Finally, the next timestep's prior belief B_{t+1|t} is predicted using the dynamics model f (the prediction step).
Figure 3: Belief in execution phase: a Gaussian random variable parameterised by mean m and variance V.
Figure 4: Belief in prediction phase: a Gaussian random variable with random mean M and non-random variance V̂, where M is itself a Gaussian random variable parameterised by mean μ_m and variance Σ_m.
Since the belief parameters m and V are functions of the now-random observations, the belief parameters must be random also, distinguished as M and V′. Given that the belief's distribution parameters are now random, the belief is hierarchically-random, denoted B ~ N(M, V′), as seen in Figure 4. Our framework allows us to consider multiple possible future belief-states analytically during policy evaluation. Intuitively, our framework is an analytical analogue of POMDP policy evaluation using particle methods. In particle methods, each particle is associated with a distinct belief, due to each conditioning on independent samples of future observations. A particle distribution thus defines a distribution over beliefs. Our method is the analytical analogue of this particle distribution, and requires no sampling. By restricting our beliefs to (parametric) Gaussians, we can tractably encode a distribution over beliefs by a distribution over belief-parameters.
4.1 Execution phase with a filter
When an actual filter is applied, it starts with three pieces of information: m_{t|t-1}, V_{t|t-1}, and a noisy observation of the system z_t (the dual subscript means the belief of the latent physical state x at time t given all observations up until time t-1 inclusive). The filtering "update step" combines prior belief b_{t|t-1} = X_t | z_{1:t-1}, u_{1:t-1} ~ N(m_{t|t-1}, V_{t|t-1}) with the observation likelihood p(z_t) = N(X_t, Σ_ε) using Bayes rule, to yield the posterior belief b_{t|t} = X_t | z_{1:t}, u_{1:t-1}:

b_{t|t} ~ N(m_{t|t}, V_{t|t}),  m_{t|t} = W_m m_{t|t-1} + W_z z_t,  V_{t|t} = W_m V_{t|t-1},   (2)

with weight matrices W_m = Σ_ε (V_{t|t-1} + Σ_ε)^{-1} and W_z = V_{t|t-1} (V_{t|t-1} + Σ_ε)^{-1} computed from the standard result for Gaussian conditioning. The policy π instead uses the updated belief-mean m_{t|t} (smoother and better-informed than z_t) to decide the action: u_t = π(m_{t|t}, θ). Thus, the joint distribution over the updated (random) belief and the (non-random) action is

b̃_{t|t} = [b_{t|t}; u_t] ~ N( m̃_{t|t} = [m_{t|t}; u_t],  Ṽ_{t|t} = [V_{t|t}, 0; 0, 0] ).   (3)
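Equation (2) is the standard Gaussian conditioning update; a minimal NumPy sketch (our variable names) follows.

```python
import numpy as np

def filter_update(m_prior, V_prior, z, S_eps):
    # Eq. (2): combine prior N(m_prior, V_prior) with observation z = x + eps,
    # eps ~ N(0, S_eps).
    K = np.linalg.inv(V_prior + S_eps)
    W_m = S_eps @ K          # weight on the prior mean
    W_z = V_prior @ K        # weight on the observation
    m_post = W_m @ m_prior + W_z @ z
    V_post = W_m @ V_prior
    return m_post, V_post
```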
Next, the filtering "prediction step" computes the predictive distribution of b_{t+1|t} = p(x_{t+1} | z_{1:t}, u_{1:t}) from the output of the dynamics model f given the random input b̃_{t|t}. The distribution f(b̃_{t|t}) is non-Gaussian yet has analytically computable moments [5]. For tractability, we approximate b_{t+1|t} as Gaussian-distributed using moment-matching:

b_{t+1|t} ~ N(m_{t+1|t}, V_{t+1|t}),  m^a_{t+1|t} = E_{b̃_{t|t}}[f^a(b̃_{t|t})],  V^{ab}_{t+1|t} = C_{b̃_{t|t}}[f^a(b̃_{t|t}), f^b(b̃_{t|t})],   (4)

where a and b refer to the a-th and b-th dynamics outputs. Both m^a_{t+1|t} and V^{ab}_{t+1|t} are derived in Appendix D. The process then repeats using the predictive belief (4) as the prior belief in the following timestep. This completes the specification of the system in execution.
4.2 Prediction phase with a filter
During the prediction phase, we compute the probabilistic behaviour of the filtered system via an analytic distribution of belief states (Figure 4). We begin with a prior belief at time t = 0, before any observations are recorded (symbolised by "-1"), setting the prior Gaussian belief to have a distribution equal to the known initial Gaussian state distribution: B_{0|-1} ~ N(M_{0|-1}, V̂_{0|-1}), where M_{0|-1} ~ N(μ^x_0, 0) and V̂_{0|-1} = Σ^x_0. Note the variance of M_{0|-1} is zero, corresponding to a single prior belief at the beginning of the prediction phase. We probabilistically predict the yet-unobserved observation Z_t using our belief distribution B_{t|t-1} and the known additive Gaussian observation noise ε_t, as per Figure 2. Since we restrict both the belief mean M and the observation Z to being Gaussian random variables, we can express their joint distribution:

[M_{t|t-1}; Z_t] ~ N( [μ^m_{t|t-1}; μ^m_{t|t-1}],  [Σ^m_{t|t-1}, Σ^m_{t|t-1}; Σ^m_{t|t-1}, Σ^z_t] ),   (5)

where Σ^z_t = Σ^m_{t|t-1} + V̂_{t|t-1} + Σ_ε.
The filtering "update step" combines prior belief B_{t|t-1} with observation Z_t using the same logic as (2), the only difference being that Z_t is now random. Since the updated posterior belief mean M_{t|t} is a (deterministic) function of random Z_t, M_{t|t} is necessarily random (with non-zero variance, unlike M_{0|-1}). Their relationship, M_{t|t} = W_m M_{t|t-1} + W_z Z_t, results in the updated hierarchical belief posterior:

B_{t|t} ~ N(M_{t|t}, V̂_{t|t}), where M_{t|t} ~ N(μ^m_{t|t}, Σ^m_{t|t}),   (6)
μ^m_{t|t} = W_m μ^m_{t|t-1} + W_z μ^m_{t|t-1} = μ^m_{t|t-1},   (7)
Σ^m_{t|t} = W_m Σ^m_{t|t-1} W_m^T + W_m Σ^m_{t|t-1} W_z^T + W_z Σ^m_{t|t-1} W_m^T + W_z Σ^z_t W_z^T,   (8)
V̂_{t|t} = W_m V̂_{t|t-1}.   (9)
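Equations (7)-(9) in code form (a sketch with our variable names; `S_m` is Σ^m_{t|t-1} and `V_prior` is V̂_{t|t-1}):

```python
import numpy as np

def hierarchical_update(mu_m, S_m, V_prior, S_eps):
    K = np.linalg.inv(V_prior + S_eps)
    W_m, W_z = S_eps @ K, V_prior @ K
    S_z = S_m + V_prior + S_eps                        # predicted observation covariance
    mu_post = mu_m                                     # Eq. (7): mean is unchanged
    S_post = (W_m @ S_m @ W_m.T + W_m @ S_m @ W_z.T    # Eq. (8)
              + W_z @ S_m @ W_m.T + W_z @ S_z @ W_z.T)
    V_post = W_m @ V_prior                             # Eq. (9)
    return mu_post, S_post, V_post
```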
The policy now has a random input M_{t|t}, thus the control output must also be random (even though π is a deterministic function): U_t = π(M_{t|t}, θ), which we implement by overloading the policy function: (μ^u_t, Σ^u_t, C^{mu}_t) = π(μ^m_{t|t}, Σ^m_{t|t}, θ), where μ^u_t is the output mean, Σ^u_t the output variance, and C^{mu}_t the input-output covariance with premultiplied inverse input variance, C^{mu}_t = (Σ^m_{t|t})^{-1} C_M[M_{t|t}, U_t]. Making a moment-matched approximation yields a joint Gaussian:

M̃_{t|t} = [M_{t|t}; U_t] ~ N( μ̃^m_{t|t} = [μ^m_{t|t}; μ^u_t],  Σ̃^m_{t|t} = [Σ^m_{t|t}, Σ^m_{t|t} C^{mu}_t; (C^{mu}_t)^T Σ^m_{t|t}, Σ^u_t] ).   (10)
Finally, we probabilistically predict the belief-mean M_{t+1|t} ~ N(μ^m_{t+1|t}, Σ^m_{t+1|t}) and the expected belief-variance V̂_{t+1|t} = E_{M̃_{t|t}}[V′_{t+1|t}]. To do this we use a novel generalisation of Gaussian process moment matching with uncertain inputs by Candela et al. [1], generalised to hierarchically-uncertain inputs, detailed in Appendix E. We have now discussed the one-step prediction of the filtered system, from B_{t|t-1} to B_{t+1|t}. Using this process repeatedly, from the initial belief B_{0|-1} we one-step predict to B_{1|0}, then to B_{2|1}, and up to B_{T|T-1}.
5 Experiments
We test our algorithm on the cartpole swing-up problem (shown in Appendix A), a benchmark for comparing controllers of nonlinear dynamical systems. We experiment using a physics simulator, solving the differential equations of the system. Each episode begins with the pendulum hanging downwards. The goal is then to swing the pendulum upright, and thereafter to continue balancing it. We use a cart of mass m_c = 0.5 kg. A zero-order-hold controller applies horizontal forces to the cart within the range [-10, 10] N. The policy is a linear combination of 100 radial basis functions. Friction resists the cart's motion with damping coefficient b = 0.1 Ns/m. Connected to the cart is a pole of length l = 0.2 m and mass m_p = 0.5 kg located at its endpoint, which swings due to gravity's acceleration g = 9.82 m/s². An inexpensive camera observes the system. Frame rates of $10 webcams are typically 30 Hz at maximum resolution, thus the time discretisation is Δt = 1/30 s. The state x comprises the cart position, pendulum angle, and their time derivatives: x = [x_c, θ, ẋ_c, θ̇]^T. We both randomly initialise the system and set the initial belief of the system according to B_{0|-1} ~ N(M_{0|-1}, V_{0|-1}), where M_{0|-1} ~ δ([0, π, 0, 0]^T) and V_{0|-1}^{1/2} = diag([0.2 m, 0.2 rad, 0.2 m/s, 0.2 rad/s]). The camera's noise standard deviation is (Σ_ε)^{1/2} = diag([0.03 m, 0.03 rad, 0.03/Δt m/s, 0.03/Δt rad/s]), noting 0.03 rad ≈ 1.7°. We use the 0.03/Δt terms since, using a camera, we cannot observe velocities directly but can estimate them with finite differences. Each episode has a two-second time horizon (60 timesteps). The cost function we impose is 1 - exp(-(1/2) d²/σ_c²), where σ_c = 0.25 m and d² is the squared Euclidean distance between the pendulum's end point and its goal.
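For concreteness, the saturating cost in code. The tip/goal geometry below is our own assumption about sign conventions (angle measured from upright, goal at the tip of an upright pole), not taken from the paper.

```python
import numpy as np

def saturating_cost(state, sigma_c=0.25, l=0.2):
    # cost = 1 - exp(-0.5 * d^2 / sigma_c^2), with d the Euclidean distance
    # between the pendulum end point and the goal.
    x_c, theta = state[0], state[1]
    tip = np.array([x_c + l * np.sin(theta), l * np.cos(theta)])  # assumed geometry
    goal = np.array([0.0, l])                                     # upright tip position
    d2 = np.sum((tip - goal) ** 2)
    return 1.0 - np.exp(-0.5 * d2 / sigma_c ** 2)
```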
We compare four algorithms: 1) PILCO by Deisenroth and Rasmussen [5] as a baseline (unfiltered
execution, and unfiltered full-prediction); 2) the method by Dallaire et al. [3] (filtered execution,
and filtered MAP-prediction); 3) the method by Deisenroth and Peters [4] (filtered execution, and
unfiltered full-prediction); and lastly 4) our method (filtered execution, and filtered full-prediction).
For clear comparison we first control for data and dynamics models, where each algorithm has access
to the exact same data and exact same dynamics model. The reason is to eliminate variance in
performance caused by different algorithms choosing different actions. We generate a single dataset
by running the baseline PILCO algorithm for 11 episodes (totalling 22 seconds of system interaction).
The independent variables of our first experiment are 1) the method of system prediction and 2) the
method of system execution. Each policy is then optimised from the same initialisation using their
respective prediction methods, before comparing performances. Afterwards, we experiment allowing
each algorithm to collect its own data, and also experiment with various noise levels.
6 Results and analysis
6.1 Results using a common dataset
We now compare algorithm performance, both predictive (Figure 5) and empirical (Figure 6). First,
we analyse predictive costs per timestep (Figure 5). Since predictions are probabilistic, the costs
have distributions, with the exception of Dallaire et al. [3] which predicts MAP trajectories and
therefore has deterministic cost. Even though we plot distributed costs, policies are optimised w.r.t.
expected total cost only. Using the same dynamics, the different prediction methods optimise different
policies (with the exception of Deisenroth and Rasmussen [5] and Deisenroth and Peters [4], whose
prediction methods are identical). During the first 10 timesteps, we note identical performance with
maximum cost due to the non-zero time required to physically swing the pendulum up near the goal.
Performances thereafter diverge. Since we predict w.r.t. a filtering process, less noise is predicted to be injected into the policy, and the optimiser can thus afford higher gain parameters w.r.t. the pole at the balance point. If we linearise our policy around the goal point, our policy has a gain of -81.7 N/rad w.r.t. pendulum angle, a larger magnitude than both Deisenroth methods' gain of -39.1 N/rad (negative values refer to left forces in Figure 11). This higher gain is advantageous here, corresponding to a more reactive system which is more likely to catch a falling pendulum. Finally, we note Dallaire et al. [3] predict very high performance. Without balancing the costs across multiple possible trajectories, the method instead optimises a sequence of deterministic states to near perfection.
To compare the predictive results against the empirical, we used 100 executions of each algorithm (Figure 6). First, we notice a stark difference between the predictive and executed performances of Dallaire et al. [3]: by neglecting model uncertainty, the method suffers from model bias. In contrast, the other methods consider uncertainty and have relatively unbiased predictions, judging by the similarity between predictive and empirical performances. Deisenroth's methods, which differ only in execution, illustrate that filtering during execution only can be better than no filtering at all. However, the major benefit comes when the policy is evaluated from multi-step predictions of a filtered system. As opposed to Deisenroth and Peters [4], our method's predictions reflect reality more closely because we both predict and execute system trajectories using closed-loop filtered control.
To test the statistical significance of the empirical cost differences given 100 executions, we use a Wilcoxon rank-sum test at each time step. Excluding time steps t = [0, 29] (whose costs are similar), the minimum z-scores over timesteps t = [30, 60] for our method having a superior average cost to each other method are as follows: Deisenroth 2011, min(z) = 4.99; Dallaire 2009, min(z) = 8.08; Deisenroth 2012, min(z) = 3.51. Since the minimum is min(z) = 3.51, we have p > 99.9% certainty that our method's average empirical cost is superior to each other method's.
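A sketch of this per-timestep test with SciPy; the cost arrays below are hypothetical placeholders for the recorded empirical costs.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical: empirical costs with shape (n_timesteps, 100 executions) per method.
costs_ours = np.random.rand(61, 100)
costs_other = np.random.rand(61, 100)

# Rank-sum z statistic at each timestep t = 30..60; positive z means the
# other method's costs tend to be larger than ours.
z_scores = [ranksums(costs_other[t], costs_ours[t])[0] for t in range(30, 61)]
min_z = min(z_scores)
```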
6.2 Results of full reinforcement learning task
In the previous experiment we used a common dataset to compare each algorithm, to isolate and focus
on how well each algorithm makes use of data, rather than also considering the different ways each
algorithm collects different data. Here, we remove the constraint of a common dataset, and test the
full reinforcement learning task by allowing each algorithm to collect its own data over repeated trials
of the cart-pole task. Each algorithm is allowed 15 trials (episodes), repeated 10 times with different
random seeds. For a particular re-run experiment and episode number, an algorithm's predicted loss
is unchanged when repeatedly computed, yet the empirical loss differs due to random initial states,
observation noise, and process noise. We therefore average the empirical results over 100 random
executions of the controller at each episode and seed.
[Plots omitted; captions follow. Legend for Figures 5-8: Deisenroth 2011, Dallaire 2009, Deisenroth 2012, Our Method.]
Figure 5: Predictive cost per timestep. The error bars show ±1 standard deviation. Each algorithm has access to the same data set (generated by baseline Deisenroth 2011) and dynamics model. Algorithms differ in their multi-step prediction methods (except Deisenroth's algorithms, whose predictions overlap).
Figure 6: Empirical cost per timestep. We generate empirical cost distributions from 100 executions per algorithm. Error bars show ±1 standard deviation. The plot colours and shapes correspond to the legend in Figure 5.
Figure 7: Predictive loss per episode. Error bars show ±1 standard error of the mean predicted loss given 10 repeats of each algorithm.
Figure 8: Empirical loss per episode. Error bars show ±1 standard error of the mean empirical loss given 10 repeats of each algorithm. In each repeat we computed the mean empirical loss using 100 independent executions of the controller.
Figure 9: Empirical loss of Deisenroth 2011 for various noise levels (k = 1, 2, 4, 8, 16). The error bars show ±1 standard deviation of the empirical loss distribution based on 100 repeats of the same learned controller, per noise level.
Figure 10: Empirical loss of Filtered PILCO for various noise levels. The error bars show ±1 standard deviation of the empirical loss distribution based on 100 repeats of the same learned controller, per noise level.
The predictive loss (cumulative cost) distributions of each algorithm are shown in Figure 7. Perhaps the most striking difference between the full reinforcement learning predictions and those made with a controlled dataset (Figure 5) is that Dallaire does not predict it will perform well. The quality of the data collected by Dallaire within the first 15 episodes is not sufficient to predict good performance. Our Filtered PILCO method accurately predicts its own strong performance and additionally outperforms the competing algorithms, as seen in Figure 8. Of interest is how each algorithm performs equally poorly during the first four episodes, with Filtered PILCO's performance breaking away and learning the task well by the seventh trial. Such a learning rate is similar to the original PILCO experiment with the noise-free cartpole.
6.3 Results with various observation noises
Different observation noise levels were also tested, comparing PILCO (Figure 9) with Filtered PILCO (Figure 10). Both figures show noise factors k, such that the observation noise is Σ_ε^{1/2} = k · diag([0.01 m, 0.01 rad, 0.01/Δt m/s, 0.01/Δt rad/s]). For reference, our previous experiments used a noise factor of k = 3. At the low noise factor k = 1, both algorithms perform similarly well, since observations are precise enough to control the system without a filter. As observation noise increases, the performance of unfiltered PILCO soon drops, whilst Filtered PILCO can successfully control the system under higher noise levels (Figure 10).
6.4 Training time complexity
Training the GP dynamics model involved N = 660 data points, M = 50 inducing points under a sparse GP Fully Independent Training Conditional (FITC) approximation [2], P = 100 policy RBF centroids, D = 4 state dimensions, F = 1 action dimension, and a T = 60 timestep horizon, with time complexity O(DNM²). Policy optimisation (with 300 steps, each of which requires trajectory prediction with gradients) is the most intense part: our method and both of Deisenroth's methods scale as O(M²D²(D+F)²T + P²D²F²T), whilst Dallaire's only scales as O(MD(D+F)T + PDFT). In the worst case we require M = O(exp(D+F)) inducing points to capture the dynamics; the average case is unknown. Total training time was four hours to train the original PILCO method, with an additional one hour to re-optimise the policy.
7 Conclusion and future work
In this paper, we extended the original PILCO algorithm [5] to filter observations, both during system execution and in the multi-step probabilistic predictions required for policy evaluation. The extended framework enables learning in a special case of partially observed MDP environments (POMDPs) whilst retaining PILCO's data-efficiency property. We demonstrated successful application to a benchmark control problem, the noisily-observed cartpole swing-up. Our algorithm learned a good policy under significant observation noise in less than 30 seconds of system interaction. Importantly, our algorithm evaluates policies with predictions that are faithful to reality: we predict w.r.t. closed-loop filtered control precisely because we execute closed-loop filtered control. We showed experimentally that faithful and probabilistic predictions improved performance with respect to the baselines. For clear comparison we first constrained each algorithm to use the same dynamics dataset, to demonstrate the superior data-usage of our algorithm. Afterwards we relaxed this constraint, and showed that our algorithm was able to learn from fewer data.
Several more challenges remain for future work. Firstly, the assumption of zero variance of the belief-variance could be relaxed. Such a relaxation allows distributed trajectories to more accurately consider belief states with various degrees of certainty (belief-variance). For example, system trajectories have larger belief-variance when passing through data-sparse regions of state-space, and smaller belief-variance in data-dense regions. Secondly, the policy could be a function of the full belief distribution (mean and variance) rather than just the mean. Such flexibility could help the policy take more cautious actions when more uncertain about the state. A third challenge is handling non-Gaussian noise and unobserved state variables. For example, in real-life scenarios using a camera sensor for self-driving, observations are occasionally fully or partially occluded, or limited by weather conditions, and such occlusions and limitations change over time, as opposed to being fixed additive Gaussian noise. Lastly, experiments with a real robot would be important to show the usefulness of the method in practice.
References
[1] Joaquin Candela, Agathe Girard, Jan Larsen, and Carl Rasmussen. Propagation of uncertainty in Bayesian kernel models: application to multiple-step ahead forecasting. In International Conference on Acoustics, Speech, and Signal Processing, volume 2, pages 701-704, 2003.
[2] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641-668, 2002.
[3] Patrick Dallaire, Camille Besse, Stephane Ross, and Brahim Chaib-draa. Bayesian reinforcement learning in continuous POMDPs with Gaussian processes. In International Conference on Intelligent Robots and Systems, pages 2604-2609, 2009.
[4] Marc Deisenroth and Jan Peters. Solving nonlinear continuous state-action-observation POMDPs for mechanical systems with Gaussian noise. In European Workshop on Reinforcement Learning, 2012.
[5] Marc Deisenroth and Carl Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning, pages 465-472, New York, NY, USA, 2011.
[6] Michael Duff. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, Department of Computer Science, University of Massachusetts Amherst, 2002.
[7] Jonathan Ko and Dieter Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75-90, 2009.
[8] Timothy Lillicrap, Jonathan Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[9] Andrew McHutchon. Nonlinear modelling and control using Gaussian processes. PhD thesis, Department of Engineering, University of Cambridge, 2014.
[10] Pascal Poupart, Nikos Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete Bayesian reinforcement learning. In International Conference on Machine Learning, pages 697-704, 2006.
[11] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, USA, 2006.
[12] Stephane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayesian reinforcement learning in continuous POMDPs with application to robot navigation. In International Conference on Robotics and Automation, pages 2845-2851, 2008.
[13] Jur van den Berg, Sachin Patil, and Ron Alterovitz. Efficient approximate value iteration for continuous Gaussian POMDPs. In Association for the Advancement of Artificial Intelligence, 2012.
[14] Dustin Webb, Kyle Crandall, and Jur van den Berg. Online parameter estimation via real-time replanning of continuous Gaussian POMDPs. In International Conference on Robotics and Automation, pages 5998-6005, 2014.
10
| 6799 |@word trial:4 version:2 advantageous:1 d2:4 grey:1 simulation:2 covariance:2 eng:1 thereby:1 moment:12 initial:4 series:1 score:1 initialisation:1 rowan:1 outperforms:1 comparing:3 yet:7 must:2 realistic:2 additive:3 shape:1 enables:2 analytic:5 remove:1 plot:2 drop:1 update:3 v:1 intelligence:1 fewer:2 advancement:1 beginning:1 record:1 manfred:1 filtered:21 premultiplied:1 complication:1 ron:1 firstly:2 bayesfilters:1 wierstra:1 dn:1 c2:1 direct:6 differential:1 pritzel:1 alterovitz:1 combine:3 underfitting:2 x0:3 expected:4 planning:6 nor:1 multi:5 simulator:1 actual:1 considering:1 increasing:1 begin:2 matched:2 mass:2 kg:2 argmin:1 cm:1 informed:2 whilst:4 unobserved:4 nonrandom:1 certainty:3 stateaction:1 gravity:1 exactly:2 uk:3 control:21 before:3 generalised:1 engineering:3 local:2 treat:1 modify:1 limit:1 encoding:1 optimised:5 subscript:1 approximately:2 black:1 might:1 collect:3 limited:1 hunt:1 range:1 faithful:2 camera:4 practice:1 implement:2 differs:1 cb2:2 procedure:1 jan:2 empirical:16 weather:1 matching:7 radial:1 cannot:1 applying:1 instability:1 equivalent:1 deterministic:5 map:3 demonstrated:1 jesse:1 straightforward:1 williams:1 pomdp:4 resolution:1 rule:2 importantly:1 initialise:2 handle:1 searching:1 autonomous:1 updated:4 controlling:1 exact:2 carl:4 us:7 gps:2 velocity:1 approximated:2 particularly:1 located:1 continues:1 predicts:4 observed:7 preprint:1 solved:1 capture:2 worst:1 calculate:1 thousand:1 region:3 cycle:1 connected:1 episode:14 observes:1 substantial:2 environment:1 mu:2 complexity:5 occluded:1 cam:3 dynamic:42 trained:1 solving:2 predictive:11 efficiency:3 basis:1 vague:1 easily:1 joint:5 various:5 train:4 distinct:2 shortcoming:1 describe:1 artificial:1 crandall:1 kevin:1 choosing:1 whose:4 larger:3 solve:2 otherwise:2 ability:1 gp:9 jointly:1 noisy:9 itself:2 final:1 online:2 analyse:1 hoc:1 advantage:2 sequence:1 analytical:3 dallaire:14 interaction:3 combining:1 loop:5 poorly:1 flexibility:1 amplified:1 inducing:3 cautious:1 optimum:1 silver:1 executing:1 help:1 derive:1 illustrate:1 ac:3 andrew:1 finitely:1 mchutchon:1 b0:3 sa:2 strong:1 edward:1 predicted:6 involves:1 come:1 differ:2 stephane:2 filter:13 enable:1 observational:1 implementing:1 require:3 behaviour:1 brahim:2 anticipate:1 secondly:2 unscented:1 hold:1 around:1 exp:3 seed:2 mapping:1 predict:15 m0:6 major:1 driving:1 inputted:2 failing:1 estimation:1 applicable:1 ross:2 replanning:1 successfully:1 mit:1 sensor:4 gaussian:31 ekf:3 rather:3 probabilistically:3 encode:1 ax:1 derived:1 focus:1 improvement:1 rank:1 likelihood:4 modelling:1 contrast:2 centroid:1 baseline:4 sense:1 posteriori:1 inference:1 bt:29 typically:1 eliminate:1 lehel:1 subroutine:1 issue:1 aforementioned:2 flexible:1 dual:1 augment:1 denoted:1 retaining:1 pascal:1 plan:1 constrained:1 special:2 marginal:3 equal:1 once:1 optimises:1 having:1 beach:1 sampling:2 optimising:4 identical:3 look:1 mcallister:1 warrant:1 future:6 intelligent:1 serious:1 few:1 randomly:1 divergence:1 phase:11 occlusion:1 ukf:1 ab:2 interest:1 cer54:1 highly:2 evaluation:5 navigation:1 xat:1 closer:1 neglecting:1 necessary:2 partial:1 respective:1 cartpole:7 intense:1 draa:2 damping:1 discretisation:1 taylor:1 continuing:1 euclidean:1 re:2 fox:1 uncertain:5 retains:1 cost:20 tractability:2 pole:3 deviation:5 usefulness:1 successful:1 seventh:1 learnt:2 st:1 international:6 amherst:1 probabilistic:12 physic:1 diverge:1 michael:1 nongaussian:1 squared:2 reflect:1 recorded:2 thesis:2 opposed:3 creating:1 resort:1 derivative:1 
stark:1 account:2 nonlinearities:1 b2:1 automation:2 coefficient:1 mp:1 caused:1 piece:1 break:2 closed:5 candela:2 doing:2 pendulum:7 start:1 bayes:3 hampering:1 maintains:2 wm:9 variance:19 yield:3 correspond:1 bayesian:8 raw:1 iterated:1 accurately:2 iid:1 mc:1 trajectory:21 pomdps:9 mlg:1 evaluates:2 inexpensive:1 against:1 pp:1 initialised:1 involved:1 larsen:1 associated:1 gain:7 chaib:2 dataset:6 treatment:1 massachusetts:1 knowledge:4 ut:20 infers:1 back:2 feed:1 higher:3 tom:1 response:1 improved:2 done:1 box:2 execute:5 evaluated:2 though:3 parameterised:4 just:2 lastly:2 until:3 agathe:1 joaquin:1 horizontal:1 nonlinear:8 propagation:1 defines:1 pineau:1 quality:1 perhaps:2 grows:1 mdp:1 usa:3 effect:1 usage:1 lillicrap:1 true:1 unbiased:1 swing:9 analytically:3 read:1 leibler:1 during:12 self:1 outline:1 demonstrate:1 necessitating:1 performs:1 motion:1 ranging:1 ef:2 novel:2 kyle:1 superior:4 common:3 functional:2 mt:22 physical:3 rl:1 conditioning:3 endpoint:1 volume:1 tassa:1 extend:4 discussed:2 association:1 significant:4 refer:3 cambridge:6 smoothness:1 rd:1 similarly:2 erez:1 particle:7 had:1 resistant:1 specification:1 access:2 similarity:1 v0:2 robot:4 patrick:1 wilcoxon:1 posterior:6 own:3 recent:1 showed:2 noisily:4 moderate:1 scenario:1 occasionally:1 outperforming:1 vt:15 joelle:1 life:1 optimiser:1 seen:3 minimum:2 additional:4 relaxed:2 impose:1 nikos:1 converge:1 signal:2 pilco:57 smoother:2 full:11 multiple:4 afterwards:2 minimising:1 long:2 post:1 equally:1 controlled:1 prediction:38 converging:1 ko:1 controller:9 optimisation:2 expectation:1 df:1 physically:1 iteration:2 sometimes:1 kernel:1 arxiv:2 achieved:1 robotics:2 addition:1 addressed:1 completes:1 source:1 unlike:2 cart:6 hz:1 isolate:1 legend:1 near:2 noting:1 aesthetic:1 enough:2 timesteps:3 competing:2 restrict:1 observability:2 computable:1 six:1 colour:1 forecasting:1 peter:5 resistance:1 speech:1 york:1 passing:1 afford:1 action:18 adequate:1 repeatedly:3 ignored:1 heess:1 deep:1 covered:1 detailed:3 clear:2 repeating:1 nonparametric:2 locally:2 sachin:1 http:1 generate:2 outperform:1 exist:1 restricts:1 problematic:1 notice:1 judging:1 correctly:2 per:9 csat:1 summarised:1 discrete:2 mat:2 express:1 thereafter:2 four:3 falling:1 capital:1 timestep:12 relaxation:1 merely:1 injects:1 sum:1 run:1 inverse:1 angle:2 uncertainty:4 injected:1 striking:1 family:1 decide:3 decision:2 appendix:4 vf:1 ct:2 datum:1 tackled:1 ahead:1 precisely:2 constraint:2 inclusive:1 x2:1 generates:1 u1:4 simulate:1 aspect:1 friction:1 min:4 relatively:1 department:4 according:3 hanging:1 combination:1 across:1 remain:1 em:1 smaller:1 making:1 intuitively:1 den:2 hoey:1 handling:2 dieter:1 equation:1 turn:1 discus:1 fed:1 end:1 available:1 gaussians:1 apply:1 observe:2 hierarchical:1 away:1 linearise:1 simulating:2 distinguished:3 thomas:1 original:10 assumes:2 dirichlet:1 include:1 running:1 graphical:1 patil:1 xc:1 neglect:1 exploit:1 webcam:1 unchanged:1 quantity:1 fa:1 parametric:1 gradient:3 distance:1 poupart:1 chris:1 collected:1 unstable:1 reason:2 assuming:2 kalman:3 length:2 code:1 relationship:1 providing:1 balance:2 susceptible:2 unfortunately:1 executed:3 webb:1 negative:1 implementation:2 zt:28 policy:55 unknown:8 perform:2 allowing:2 observation:54 markov:2 benchmark:3 enabling:1 finite:2 daan:1 situation:1 extended:6 excluding:1 precise:1 vlassis:1 frame:1 duff:1 arbitrary:1 camille:1 david:1 required:4 kl:1 mechanical:1 z1:6 rad:9 acoustic:1 momentmatching:1 distinction:1 learned:3 hour:2 nip:1 
tractably:1 address:1 able:1 bar:6 below:1 dynamical:2 mismatch:1 challenge:2 rf:1 including:1 optimise:4 wz:7 belief:59 analogue:2 suitable:1 overlap:1 treated:1 force:2 predicting:2 resists:1 scheme:1 improve:1 fitc:1 brief:1 mdps:1 catch:1 naive:1 perfection:1 prior:14 literature:1 evolve:1 relative:1 law:1 loss:15 fully:2 regan:1 limitation:2 filtering:20 unfiltered:8 degree:1 sufficient:1 consistent:1 balancing:1 totalling:1 repeat:9 rasmussen:6 free:2 soon:1 offline:1 bias:3 allow:1 sparse:3 benefit:3 distributed:3 van:2 dimension:2 opper:1 transition:4 cumulative:1 doesn:1 computes:1 forward:1 made:1 reinforcement:11 adaptive:1 approximate:3 observable:1 preferred:1 kullback:1 keep:1 logic:1 overfitting:2 b1:1 assumed:2 xi:4 continuous:14 search:3 latent:10 reality:2 additionally:2 learn:7 ca:1 nicolas:1 interact:1 expansion:1 complex:2 necessarily:1 european:1 marc:2 diag:4 significance:1 main:1 hierarchically:1 dense:1 s2:1 noise:43 hyperparameters:1 fair:1 suffering:1 repeated:2 allowed:1 x1:3 girard:1 referred:1 downwards:1 besse:1 ny:1 n:1 position:1 comprises:1 exponential:2 breaking:1 third:1 dustin:1 learns:1 down:2 erroneous:1 specific:3 xt:26 pz:2 incorporating:1 workshop:1 restricting:1 overloading:1 phd:2 magnitude:1 execution:25 conditioned:1 horizon:4 suited:1 timothy:1 likely:1 partially:4 applies:3 ma:1 conditional:1 goal:4 acceleration:1 rbf:1 change:2 experimentally:1 specifically:3 except:2 reducing:1 generalisation:1 upright:1 yuval:1 conservative:1 called:1 total:3 pas:1 experimental:1 la:1 exception:2 deisenroth:22 berg:2 jonathan:2 alexander:1 reactive:1 evaluate:2 tested:1 ex:1 |
6,411 | 68 | 367
SCHEMA FOR MOTOR CONTROL UTILIZING A NETWORK MODEL OF THE CEREBELLUM
James C. Houk, Ph.D.
Northwestern University Medical School, Chicago, Illinois
60201
ABSTRACT
This paper outlines a schema for movement control
based on two stages of signal processing. The higher stage
is a neural network model that treats the cerebellum as an
array of adjustable motor pattern generators. This network
uses sensory input to preset and to trigger elemental
pattern generators and to evaluate their performance. The
actual patterned outputs, however, are produced by intrinsic circuitry that includes recurrent loops and is thus
capable of self-sustained activity. These patterned
outputs are sent as motor commands to local feedback
systems called motor servos. The latter control the forces
and lengths of individual muscles. Overall control is thus
achieved in two stages:
(1) an adaptive cerebellar network
generates an array of feedforward motor commands and (2) a
set of local feedback systems translates these commands
into actual movements.
INTRODUCTION
There is considerable evidence that the cerebellum is
involved in the adaptive control of movement 1 , although the
manner in which this control is achieved is not well understood. As a means of probing these cerebellar mechanisms,
my colleagues and I have been conducting microelectrode
studies of the neural messages that flow through the intermediate division of the cerebellum and onward to limb
muscles via the rubrospinal tract. We regard this cerebellorubrospinal pathway as a useful model system for studying
general problems of sensorimotor integration and adaptive
brain function. A summary of our findings has been published as a book chapter 2 .
On the basis of these and other neurophysiological
results, I recently hypothesized that the cerebellum functions as an array of adjustable motor pattern generators 3 .
The outputs from these pattern generators are assumed to
function as motor commands, i.e., as neural control signals
that are sent to lower-level motor systems where they
produce movements. According to this hypothesis, the
cerebellum uses its extensive sensory input to preset the
pattern generators, to trigger them to initiate the
production of patterned outputs and to evaluate the success
or failure of the patterns in controlling a motor behavior.
However, sensory input appears not to playa major role in
shaping the waveforms of the patterned outputs. Instead,
these waveforms seem to be produced by intrinsic circuitry.
The initial purpose of the present paper is to provide
some ideas for a neural network model of the cerebellum
that might be capable of accounting for adjustable motor
pattern generation. Several previous authors have
described network models of the cerebellum that, like the
present model, are based on the neuroanatomical organization of this brain structure 4,5,6 . While the present model
borrows heavily from these previous models, it has some
additional features that may explain the unique manner in
which the cerebellum processes sensory input to produce
motor commands. A second purpose of this paper is to
outline how this network model fits within a broader schema
for motor control that I have been developing over the past
several years 3,7. Before presenting these ideas, let me
first review some basic physiology and anatomy of the
cerebellum 1 .
SIGNALS AND CIRCUITS IN THE CEREBELLUM
There are three main categories of input fibers to the
cerebellum, called mossy fibers, climbing fibers and
noradrenergic fibers. As illustrated in Fig. 1, the mossy
fiber input shows considerable fan-out via granule cells
and parallel fibers. The parallel fibers in turn are
arranged to provide a high degree of fan-in to individual
Purkinje cells (P). These P cells are the sole output
elements of the cortical portion of the cerebellum. Via
the parallel fiber input, each P cell is exposed to
approximately 200,000 potential messages. In marked
contrast, the climbing fiber input to P cells is highly
focused. Each climbing fiber branches to only 10 P cells,
and each cell receives input from only one climbing fiber.
Although less is known about input via noradrenergic
fibers, it appears to be diffuse and even more divergent
than the mossy fiber input.
Mossy fibers originate from several brain sites transmitting a diversity of information about the external world
and the internal state of the body. Some mossy fiber
inputs are clearly sensory. They come fairly directly from
cutaneous, muscle or vestibular receptors. Others are
routed via the cerebral cortex where they represent highly
processed visual, auditory or somatosensory information.
Yet another category of mossy fiber transmits information
about central motor commands (Fig. 1 shows one such pathway, from collaterals of the rubrospinal tract relayed
through the lateral reticular nucleus (L)). The discharge
rates of mossy fibers are modulated over a wide dynamic
range which permits them to transmit detailed parametric
information about the state of the body and its external
environment.
[Figure 1 diagram: labels include noradrenergic fibers, sensory inputs, sensorimotor cortex, and motor commands carried by the rubrospinal tract]
Figure 1: Pathways through the cerebellum. This diagram,
which highlights the cerebellorubrospinal system, also
constitutes a circuit diagram for the model of an
elemental pattern generator.
The sole source of climbing fibers is from cells
located in the inferior olivary nucleus. Olivary neurons
are selectively sensitive to sensory events. These cells
have atypical electrical properties which limit their
discharge to rates less than 10 impulses/sec, and usual
rates are closer to 1 impulse/sec. As a consequence,
individual climbing fibers transmit very little parametric
information about the intensity and duration of a stimulus;
instead, they appear to be specialized to detect simply the
occurrences of sensory events. There are also motor inputs
to this pathway, but they appear to be strictly inhibitory.
The motor inputs gate off responsiveness to self-induced
(or expected) stimuli, thus converting olivary neurons into
detectors of unexpected sensory events.
Given the abundance of sensory input to P cells via
mossy and climbing fibers, it is remarkable that these
cells respond so weakly to sensory stimulation. Instead,
they discharge vigorously during active movements. P cells
send abundant collaterals to their neighbors, while their
main axons project to the cerebellar nuclei and then onward
to several brain sites that in turn relay motor commands to
the spinal cord.
Fig. 1 shows P cell projections to the intermediate
cerebellar nucleus (I), also called the interpositus
nucleus. The red nucleus (R) receives its main input from
the interpositus nucleus, and it then transmits motor
commands to the spinal cord via the rubrospinal tract.
Other premotor nuclei that are alternative sources of motor
commands receive input from alternative cerebellar output
circuits. Fig. 1 thus specifically illustrates the
cerebellorubrospinal system, the portion of the cerebellum
that has been emphasized in my laboratory.
Microelectrode recordings from the red nucleus have
demonstrated signals that appear to represent detailed
velocity commands for distal limb movements.
Bursts of
discharge precede each movement, the frequency of discharge
within the burst corresponds to the velocity of movement,
and the duration of the burst corresponds to the duration
of movement. These velocity signals are not shaped by
continuous feedback from peripheral receptors; instead,
they appear to be produced centrally. An important goal of
the modelling effort outlined here is to explain how these
velocity commands might be produced by cerebellar circuits
that function as elemental pattern generators. I will then
discuss how an array of these pattern generators might
serve well in an overall schema of motor control.
ELEMENTAL PATTERN GENERATORS
The motivation for proposing pattern generators rather
than more conventional network designs derives from the
experimental observation that motor commands, once initiated, are not affected, or are only minimally affected, by
alterations in sensory input. This observation indicates
that the temporal features of these motor commands are
produced by self-sustained activity within the neural
network rather than by the time courses of network inputs.
Two features of the intrinsic circuitry of the cerebellum may be particularly instrumental in explaining selfsustained activity. One is a recurrent pathway from cerebellar nuclei that returns back to cerebellar nuclei. In
the case of the cerebellorubrospinal system in Fig. 1, the
recurrent pathway is from the interpositus nucleus to red
nucleus to lateral reticular nucleus and back to interpositus, what I will call the IRL loop. The other feature of
intrinsic cerebellar circuitry that may be of critical
importance in pattern generation is mutual inhibition
between P cells. Fig. 1 shows how mutual inhibition
results from the recurrent collaterals of P-cell axons.
Inhibitory interneurons called basket and stellate cells
(not shown in Fig. 1) provide additional pathways for
mutual inhibition. Both the IRL loop and mutual inhibition
between P cells constitute positive feedback circuits and,
as such, are capable of self-sustained activity.
Self-sustained activity in the form of high-frequency
spontaneous discharge has been observed in the IRL loop
under conditions in which the inhibitory P-cell input to I
cells is blocked 3. Trace A in Fig. 2 shows this unrestrained discharge schematically, and the other traces
illustrate how a motor command might be sculpted out of
this tendency toward high-frequency, repetitive discharge.
Trace B shows a brief burst of input presumed to be
sent from the sensorimotor cortex to the R cell in Fig. 1.
This burst serves as a trigger that initiates repetitive
discharge in an IRL loop, and trace D illustrates the
discharge of an I cell in the active loop. The intraburst
discharge frequency of this cell is presumed to be
determined by the summed magnitude of inhibitory input
(shown in trace C) from the set of P cells that project to
it (Fig. 1 shows only a few P cells from this set). Since
the inhibitory input to I was reduced to an appropriate
magnitude for controlling this intraburst frequency some
time prior to the arrival of the trigger event, this
example illustrates a mechanism for presetting the pattern
generator. Note that the same reduction of inhibition that
presets the intraburst frequency would bring the loop
closer to the threshold for repetitive firing, thus serving
to enable the triggering operation. The I-cell burst,
after continuing for a duration appropriate for the desired
motor behavior, is assumed to be terminated by an abrupt
increase in inhibitory input from the set of P cells that
project to I (trace C).
The time course of bursting discharge illustrated in
Fig. 2D would be expected to propagate throughout the IRL
loop and be transmitted via the rubrospinal tract to the
spinal cord where it could serve as a motor command.
Bursts of R-cell discharge similar to this are observed to
precede movements in trained monkey subjects 2 .
[Figure 2 graphic: schematic spike trains for traces A-D plotted against time]
Figure 2: Signals Contributing to Pattern Generation. A.
Repetitive discharge of I cell in the absence of P-cell inhibition. B. Trigger burst sent to the IRL
loop from sensorimotor cortex. C. Summed inhibition
produced by the set of P cells projecting to the I
cell. D. Resultant motor pattern in I cell.
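The gating logic of Fig. 2 can be made concrete with a small simulation. The sketch below is a deliberately simplified caricature rather than a biophysical model: the IRL loop is treated as a bistable unit whose intraburst rate is the maximum loop rate minus the summed P-cell inhibition, and all parameter values are invented for illustration.

```python
import numpy as np

def sculpt_burst(trigger, inhibition, f_max=200.0, gain=2.0):
    """Caricature of Fig. 2: a trigger switches the loop into repetitive
    firing; the summed P-cell inhibition sets the intraburst rate and,
    when it rises sharply, terminates the burst."""
    rate = np.zeros(len(trigger))
    stop = 0.8 * f_max / gain          # inhibition level that halts the loop
    active = False
    for k in range(len(trigger)):
        if trigger[k] > 0:
            active = True              # cortical trigger starts the loop
        if inhibition[k] > stop:
            active = False             # abrupt P-cell inhibition ends it
        if active:
            rate[k] = max(f_max - gain * inhibition[k], 0.0)
    return rate

t = np.arange(100)
trigger = (t == 20).astype(float)           # brief trigger burst (trace B)
inhibition = np.where(t < 60, 30.0, 90.0)   # preset low, then abrupt rise (C)
burst = sculpt_burst(trigger, inhibition)   # nonzero only for 20 <= t < 60 (D)
```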
The sculpting of a motor command out of a repetitive
firing tendency in the IRL loop clearly requires timed
transitions in the discharge rates of specific P cells.
The present model postulates that the latter result from
state transitions in the network of P cells. Bell and
Grimm 8 described spontaneous transitions in P-cell firing
that occur intermittently, and I have frequently observed
them as well. These transitions appear to be produced by
intrinsic mechanisms and are difficult to influence with
sensory stimulation. The mutual recurrent inhibition
between P cells might explain this tendency toward state
transitions.
Recurrent inhibition between P cells is mediated by
synapses near the cell bodies and primary dendrites of the
P cells whereas parallel fiber input extends far out on the
dendritic tree. This arrangement may explain why sensory
input via parallel fibers does not have a strong, continuous effect on P cell discharge. This sensory input may
serve mainly to promote state transitions in the network of
P cells, perhaps by modulating the likelihood that a given
P cell would participate in a state transition. Once the
transition starts, the activity of the P cell may be dominated by the recurrent inhibition close to the cell body.
The mechanism responsible for the adaptive adjustment
of these elemental pattern generators may be a change in
the synaptic strengths of parallel fiber input to P cells 9 .
Such alterations in the efficacy of sensory input would
influence the state transitions discussed in the previous
paragraph, thus mediating adaptive adjustments in the
amplitude and timing of patterned output. Elsewhere I have
suggested that this learning process is analogous to operant conditioning and includes both positive and negative
reinforcement 3 . Noradrenergic fibers might mediate positive reinforcement, whereas climbing fibers might mediate
negative reinforcement. For example, if the network were
controlling a limb movement, negative reinforcement might
occur when the limb bumps into an object in the work space
(climbing fibers fire in response to unexpected somatic
events such as this), whereas positive reinforcement might
occur whenever the limb successfully acquires the desired
target (the noradrenergic fibers to the cerebellum are
thought to receive input from reward centers in the brain) .
Positive reinforcement may be analogous to the associative
reward-punishment algorithm described by Barto 10 , which
would fit with the diffuse projections of noradrenergic
fibers. Negative reinforcement might be capable of a
higher degree of credit assignment in view of the more
focused projections of climbing fibers.
In summary, the previous paragraphs outline some ideas
that may be useful in developing a network model of the
cerebellum. This particular set of ideas was motivated by
a desire to explain the unique manner in which the cerebellum uses sensory input to control patterned output. The
model deals explicitly with small circuits within a much
larger network. The small circuits are considered elemental pattern generators, whereas the larger network can be
considered an array of these pattern generators. The
assembly of many elements into an array may give rise to
some emergent properties of the network, due to
interactions between the elements. However, the highly
compartmentalized anatomical structure of the cerebellum
fosters the notion of relatively independent elemental
pattern generators as hypothesized in the schema for
movement control presented in the next section.
SCHEMA FOR MOTOR CONTROL
A major aim in developing the elemental pattern
generator model described in the previous section was to
explain the intriguing manner in which the cerebellum uses
sensory input. Stated succinctly, sensory input is used to
preset and to trigger each elemental pattern generator and
to evaluate the success of previous output patterns in
controlling motor behavior. However, sensory input is not
used to shape the waveform of an ongoing output pattern.
This means that continuous feedback is not available, at
the level of the cerebellum, for any immediate adjustments
of motor commands.
Is this kind of behavior actually advantageous in the
control of movement? I would propose the affirmative,
particularly on the grounds that this strategy seems to
have withstood the test of evolution. Elsewhere I have
reviewed the global strategies that are used to control
several different types of body function 11 .
A common
theme in each of these physiological control systems is the
use of negative feedback only as a low-level strategy, and
this coupled with a high-level stage of adaptive
feedforward control. It was argued that this particular
two-stage control strategy is well suited for utilizing the
advantageous features of feedback, feedforward and adaptive
control in combination.
The adjustable pattern generator model of the cerebellum outlined in the previous section is a prime example of
an adaptive, feedforward controller. In the subsequent
paragraphs I will outline how this high-level feedforward
controller communicates with low-level feedback systems
called motor servos to produce limb movements (Fig. 3).
The array of adjustable pattern generators (PGn) in
the first column of Fig. 3 produces an array of elemental
commands that are transmitted via descending fibers to the
spinal cord. The connectivity matrix for descending fibers
represents the consequences of their branching patterns.
Any given fiber is likely to branch to innervate several
motor servos. Similarly, each member of the array of motor
servos (MS m) receives convergent input from a large number
of pattern generators, and the summed total of this input
constitutes its overall motor command.
A motor servo consists of a muscle, its stretch receptors and the spinal reflex pathways back to the same muscle 12 . These reflex pathways constitute negative feedback
loops that interact with the motor command to control the
discharge of the motor neuron pool innervating the
particular muscle. Negative feedback from the muscle
receptors functions to maintain the stiffness of the muscle
relatively constant, thus providing a spring-like interface
between the body and its mechanical environment 13 . The
motor command acts to set the slack length of this
equivalent spring and, in this way, influences motion of
the limb. Feedback also gives rise to an unusual type of
damping proportional to a low fractional power of
velocity 14 . The individual motor servos interact with each
other and with external loads via the trigonometric
relations of the musculoskeletal matrix to produce
resultant joint positions.
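A toy version of a single motor servo's spring-like interface, with the slack length set by the descending motor command and the fractional-power damping just described, might look as follows. The constants, including the 0.17 exponent, are illustrative placeholders rather than values taken from this paper.

```python
import numpy as np

def motor_servo_force(length, velocity, command,
                      stiffness=30.0, b=8.0, exponent=0.17):
    """Spring-like muscle interface maintained by the reflex loop.

    The motor command sets the spring's slack length x0, stiffness is
    held roughly constant by feedback, and damping grows with a low
    fractional power of velocity.
    """
    x0 = command                       # command recoded as slack length
    elastic = stiffness * (length - x0)
    damping = b * np.sign(velocity) * np.abs(velocity) ** exponent
    return elastic + damping

f = motor_servo_force(length=1.10, velocity=-0.2, command=1.0)
```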
[Figure 3 graphic: pattern generators PG 1 ... PG N in the cerebellar network send elemental commands through a fixed connection matrix to motor servos MS 1 ... MS M; the servos' forces and lengths act through the musculoskeletal matrix on shoulder, elbow, wrist and finger joints and an external load]
Figure 3: Schema for Motor Control Utilizing Pattern Generator Model of Cerebellum. An array of elemental
pattern generators (PG n ) operates in an adaptive, feedforward manner to produce motor commands. These outputs of the high-level stage are sent to the spinal
cord where they serve as inputs to a low-level array
of negative feedback systems called motor servos
(MS m ). The latter regulate the forces and lengths of
individual muscles to control joint angles.
While the schema for motor control presented here is
based on a considerable body of experimental data, and it
also seems plausible as a strategy for motor control, it
will be important to explore its capabilities for human
limb control with simulation studies. It may also be
fruitful to apply this schema to problems in robotics.
Since I am mainly an experimentalist, my authorship of this
paper is meant as an entrée for collaborative work with
neural network modelers that may be interested in these
problems.
REFERENCES
1. M. Ito, The Cerebellum and Neural Control (Raven Press, N.Y., 1984).
2. J. C. Houk & A. R. Gibson, In: J. S. King, New Concepts in Cerebellar Neurobiology (Alan R. Liss, Inc., N.Y., 1987), p. 387.
3. J. C. Houk, In: M. Glickstein & C. Yeo, Cerebellum and Neuronal Plasticity (Plenum Press, N.Y., 1988), in press.
4. D. Marr, J. Physiol. (London) 202, 437 (1969).
5. J. S. Albus, Math. Biosci. 10, 25 (1971).
6. C. C. Boylls, A Theory of Cerebellar Function with Applications to Locomotion (COINS Tech. Rep. 76-1, U. Mass. Amherst).
7. J. C. Houk, In: J. E. Desmedt, Cerebral Motor Control in Man: Long Loop Mechanisms (Karger, Basel, 1978), p. 193.
8. C. C. Bell & R. J. Grimm, J. Neurophysiol. 32, 1044 (1969).
9. C.-F. Ekerot & M. Kano, Brain Res. 342, 357 (1985).
10. A. G. Barto, Human Neurobiol. 4, 229 (1985).
11. J. C. Houk, FASEB J. 2, 97-107 (1988).
12. J. C. Houk & W. Z. Rymer, In: V. B. Brooks, Handbook of Physiology, Vol. 1 of Sect. 1 (American Physiological Society, Bethesda, 1981), p. 257.
13. J. C. Houk, Annu. Rev. Physiol. 41, 99 (1979).
14. C. C. A. M. Gielen & J. C. Houk, Biol. Cybern. 52, 217 (1987).
Information Theoretic Analysis of
Connection Structure from Spike Trains
Satoru Shiono*
Central Research Laboratory
Mitsubishi Electric Corporation
Amagasaki, Hyogo 661, Japan

Satoshi Yamada
Central Research Laboratory
Mitsubishi Electric Corporation
Amagasaki, Hyogo 661, Japan

Michio Nakashima
Central Research Laboratory
Mitsubishi Electric Corporation
Amagasaki, Hyogo 661, Japan

Kenji Matsumoto
Faculty of Pharmaceutical Science
Hokkaido University
Sapporo, Hokkaido 060, Japan

*Corresponding author
Abstract
We have attempted to use information theoretic quantities for analyzing neuronal connection structure from spike trains. Two point
mutual information and its maximum value, channel capacity, between a pair of neurons were found to be useful for sensitive detection of crosscorrelation and for estimation of synaptic strength,
respectively. Three point mutual information among three neurons
could give their interconnection structure. Therefore, our information theoretic analysis was shown to be a very powerful technique
for deducing neuronal connection structure. Some concrete examples of its application to simulated spike trains are presented.
1
INTRODUCTION
The deduction of neuronal connection structure from spike trains, including synaptic
strength estimation, has long been one of the central issues for understanding the
structure and function of the neuronal circuit and thus the information processing
mechanism at the neuronal circuitry level. A variety of crosscorrelational techniques
for two or more neurons have been proposed and utilized (e.g., Melssen and Epping,
1987; Aertsen et al., 1989). There are, however, some difficulties with those
techniques, as discussed by, e.g., Yang and Shamma (1990). It is sometimes difficult
for the method to distinguish a significant crosscorrelation from noise, especially
when the amount of experimental data is limited. The quantitative estimation
of synaptic connectivity is another difficulty. And it is impossible to determine
whether two neurons are directly connected or not, only by finding a significant
crosscorrelation between them.
The information theory has been shown to afford a powerful tool for the description
of neuronal input-output relations, such as in investigations of the neuronal coding of the visual cortex (Eckhorn et al., 1976; Optican and Richmond, 1987). But
there has been no extensive study to apply it to the correlational analysis of action
potential trains. Because a correlational method using information theoretic quantities is considered to give a better correlational measure, the information theory is
expected to offer a unique correlational method to overcome the above difficulties.
In this paper, we describe information theory-based correlational analysis for action
potential trains, using two and three point mutual information (MI) and channel
capacity. Because the information theoretic analysis by two point MI and channel capacity will be published in the near future (Yamada et al., 1993a), a more detailed description is given here of the analysis by three point MI for inferring the relationship among three neurons.
2
2.1
CORRELATIONAL ANALYSIS BASED ON
INFORMATION THEORY
INFORMATION THEORETIC QUANTITIES
According to the information theory, the n point mutual information expresses the
amount of information shared among n processes (McGill, 1955). Let X, Y and
Z be processes, and t and s be the time delays of X and Y from Z, respectively.
Using Shannon entropies H, two point MI between X and Y and three point MI are defined as (Shannon, 1948; Ikeda et al., 1989):
    I(X_t : Y_s) = H(X_t) + H(Y_s) - H(X_t, Y_s),                                  (1)

    I(X_t : Y_s : Z) = H(X_t) + H(Y_s) + H(Z) - H(X_t, Y_s)
                       - H(Y_s, Z) - H(Z, X_t) + H(X_t, Y_s, Z).                   (2)
I(X_t : Y_s : Z) is related to I(X_t : Y_s) as follows:

    I(X_t : Y_s : Z) = I(X_t : Y_s) - I(X_t : Y_s | Z),                            (3)

where I(X_t : Y_s | Z) means the two point conditional MI between X and Y if the state of Z is given. On the other hand, channel capacity is given by (r = s - t)

    CC(X : Y_r) = max_{p(x_t)} I(X : Y_r).                                         (4)
We consider now X, Y and Z to be neurons whose spike activity has been measured.
Two point MI and two point conditional MI are obtained by (i, j, k = 0, 1):

    I(X : Y_r) = sum_{i,j} p(y_{j,r} | x_i) p(x_i) log [ p(y_{j,r} | x_i) / p(y_{j,r}) ],          (5)

    I(X_t : Y_s | Z) = sum_{i,j,k} p(x_{i,t}, y_{j,s} | z_k) p(z_k)
                       log [ p(x_{i,t}, y_{j,s} | z_k) / ( p(x_{i,t} | z_k) p(y_{j,s} | z_k) ) ],  (6)

where x, y and z mean the states of the neurons, e.g., x_1 for the firing state and x_0 for the non-firing state of X, and p( ) denotes probability. Three point MI is then obtained by using Equation (3). Those information theoretic quantities are calculated using the probabilities estimated from the spike trains of X, Y and Z after the spike trains are converted into time sequences consisting of 0 and 1 with discrete time steps, as described elsewhere (Yamada et al., 1993a).
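A direct way to compute these quantities from binned spike trains is sketched below (Python, assuming equal-length boolean arrays with one bin per time step; the probabilities are naive empirical frequencies, and np.roll's wrap-around in the demo is ignored for simplicity).

```python
import numpy as np

def entropy(*series):
    """Shannon entropy (bits) of the joint state of binary time series."""
    joint = sum(s.astype(int) << k for k, s in enumerate(series))
    p = np.bincount(joint, minlength=2 ** len(series)) / len(joint)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def align(x, y, z, t, s):
    """Pair Z index k with X index k + t and Y index k + s."""
    lo = max(0, -t, -s)
    hi = len(z) - max(0, t, s)
    return x[lo + t:hi + t], y[lo + s:hi + s], z[lo:hi]

def three_point_mi(x, y, z, t, s):
    """I(X_t : Y_s : Z) computed via Equation (2)."""
    xt, ys, z0 = align(x, y, z, t, s)
    return (entropy(xt) + entropy(ys) + entropy(z0)
            - entropy(xt, ys) - entropy(ys, z0) - entropy(z0, xt)
            + entropy(xt, ys, z0))

# Toy demo: Z drives X (17-bin delay) and Y (12-bin delay) with no X-Y
# link; renamed by delays this is case III, so the peak at (17, 12)
# should be positive.
rng = np.random.default_rng(1)
z = rng.random(5000) < 0.2
x = np.roll(z, 17) & (rng.random(5000) < 0.9)
y = np.roll(z, 12) & (rng.random(5000) < 0.9)
print(three_point_mi(x, y, z, 17, 12))
```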
2.2
PROCEDURE FOR THREE POINT MUTUAL INFORMATION
ANALYSIS
Suppose that a three point MI peak is found at (t_0, s_0) in the (t, s)-plane (see Figure 1). The three time delays, t_0, s_0 and r_0 = s_0 - t_0, are obtained. They are supposed to be time delays in three possible interconnections between any pair of neurons. Because the peak is not significant if only one pair of the three neurons is interconnected, two or three of the possible interconnections with corresponding time delays should truly work to produce the peak. We will utilize I(n : m) and I(n : m | l) (n, m, l = X, Y or Z) at the peak to find working interconnections out of them. These quantities are obtained by recalculating each probability in Equations (5) and (6) over the whole peak region.
If two neurons, e.g., X and Y, are not interconnected, either I(X : Y) or I(X : Y | Z) is equal to zero. The reverse proposition, however, is not true. The necessary and sufficient condition for having no interconnection is obtained by calculating I(n : m) and I(n : m | l) for all possible interconnection structures. The neurons are rearranged and renamed A, B and C in the order of the time delays. There are only four interconnection structures, as shown in Table 1.

I: No interconnection between A and B. A and B are statistically independent, i.e., p(a_i, b_j) = p(a_i) p(b_j), I(A : B) = 0. The three point MI peak is negative.

II: No interconnection between A and C. The states of A and C are statistically independent when the state of B is given, i.e., p(a_i, c_k | b_j) = p(a_i | b_j) p(c_k | b_j), I(A : C | B) = 0. The peak is positive.

III: No interconnection between B and C. Similar to case II, because p(b_j, c_k | a_i) = p(b_j | a_i) p(c_k | a_i), I(B : C | A) = 0. The peak is positive.

IV: Three interconnections. The above three cases are considered to occur concomitantly in this case. The peak is positive or negative, depending on their relative contributions. Because A and B should have an apparent effect on the firing probability of the postsynaptic neurons, I(A : B), I(A : C | B) and I(B : C | A) are all non-zero, except for the case where the activity of B completely coincides with that of A at the specified time delay (in this case, both I(A : C | B) and I(B : C | A) are zero; see Yamada et al., 1993b).
Table 1. Interconnection Structure and Information Theoretic Quantities

                                  I             II            III           IV
Interconnection structure     A->C, B->C    A->B, B->C    A->B, A->C    A->B, A->C, B->C

2 point MI
  I(A:B)                          =0            >0            >0            >0
  I(A:C)                          >=0           >0            >0            >0
  I(B:C)                          >=0           >0            >0            >0

2 point conditional MI
  I(A:B|C)                        >=0           >0            >0            >=0
  I(A:C|B)                        >=0           =0            >0            >=0
  I(B:C|A)                        >=0           >0            =0            >=0

3 point MI
  I(A:B:C)                        -             +             +             + or -
From what we have described above, the interconnection structure for a three point MI peak is deduced using the following procedure:

(a) A negative 3pMI peak: it corresponds to case I or IV. The problem is to determine whether A and B are interconnected or not.
    (1) If I(A : B) = 0, case I.
    (2) If I(A : B) > 0, case IV.

(b) A positive 3pMI peak: it corresponds to case II, III or IV. The existence of the A-C and B-C interconnections has to be checked.
    (1) If I(A : C | B) > 0 and I(B : C | A) > 0, case IV.
    (2) If I(A : C | B) = 0 and I(B : C | A) > 0, case II.
    (3) If I(A : C | B) > 0 and I(B : C | A) = 0, case III.
    (4) If I(A : C | B) = 0 and I(B : C | A) = 0, the interconnection structure cannot be deduced except for the A-B interconnection.
This procedure is applicable if all the time delays are non-zero. Otherwise, some of the interconnections cannot be determined (Yamada et al., 1993b).
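The decision procedure lends itself to a direct implementation. The sketch below encodes steps (a) and (b) for a single peak; the eps threshold stands in for whatever statistical test one uses to decide that a quantity is effectively zero, and is our own placeholder, not a value from the paper.

```python
def classify_peak(peak_sign, I_AB, I_AC_given_B, I_BC_given_A, eps=1e-3):
    """Steps (a)/(b) applied to MI values recomputed over one peak region."""
    if peak_sign < 0:                                   # (a) negative peak
        return "case I" if I_AB < eps else "case IV"
    ac = I_AC_given_B > eps                             # (b) positive peak
    bc = I_BC_given_A > eps
    if ac and bc:
        return "case IV"
    if bc:
        return "case II"
    if ac:
        return "case III"
    return "only the A-B interconnection is identified"

# Values from the Figure 1 peak at (17, 12): I(B:C|A) is nearly zero.
print(classify_peak(+1, I_AB=0.03596, I_AC_given_B=0.05376,
                    I_BC_given_A=0.00011))             # -> "case III"
```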
3
SIMULATED SPIKE TRAINS
In order to characterize our information theoretic analysis, simulations of neuronal network models were carried out. We used a model neuron described by
the Hodgkin-Huxley equations (Yamada et al., 1989). The equations and parameters used are described in Yamada et al. (1993a). The Hodgkin-Huxley equations were integrated numerically by the Runge-Kutta-Gill method.
4
4.1
RESULTS AND DISCUSSION
ANALYSIS BY TWO POINT MUTUAL INFORMATION AND
CHANNEL CAPACITY
The performance of the information theoretic analysis by two point MI and channel capacity was reported previously (Yamada et al., 1993a).
Briefly, this analytical method was compared with some conventional ones for both excitatory and inhibitory connections, using action potential trains obtained by simulation of a model neuronal network. It was shown to have the following advantages. First, its nonlinear function reduced correlational measures within the bounds of noise and simultaneously amplified those beyond the bounds. This makes it easier, in its crosscorrelation graph, to find a neuron pair having a weak but significant interaction, especially when the synaptic strength is small or the amount of experimental data is limited. Second, channel capacity was shown to allow fairly effective estimation of synaptic strength, independent of the firing probability of the presynaptic neuron, as long as this firing probability was not large enough to cause overlap of two successive postsynaptic potentials.
4.2
ANALYSIS BY THREE POINT MUTUAL INFORMATION
The practical application of the analysis by three point MI is shown below in detail,
using spike trains obtained by simulation of the three-neuron network models shown
in Figures 1 and 2 (Yamada et al., 1993b).
The network model in Figure 1(1) has three interconnections. In Figure 1(2), three point MI has two positive peaks at (17 ms, 12 ms) (the unit "ms" is omitted hereafter) and (17, 30), and one negative peak at (0, 12). For the peak at (17, 12), the neurons are renamed A, B and C from the time delays (Z as A, Y as B and X as C), as in Table 1. Because only I(B : C | A) is approximately zero (see the Figure 1 legend), the peak indicates case III with A->B (Z->Y) (s = 12) and A->C (Z->X) (t = 17) interconnections. Similarly, the peak at (17, 30) indicates Z->X and X->Y (s - t = 13) interconnections, and the peak at (0, 12) indicates Z->Y and X->Y interconnections. The interconnection structure deduced from each three point MI peak is consistent with the others, and in agreement with the network model.
Alternatively, the three point MI graphical presentation itself, such as that shown in Figure 1(2), gives an indication of some truly existing interconnections. If more than two three point MI peaks are found on one of the three lines, t = t_0, s = s_0 or s - t = r_0, the interconnection with the time delay represented by that line is considered to be real. For example, because the peaks at (17, 12) and (17, 30) are on the line t = 17 (Figure 1(2)), the interconnection represented by t = 17 (Z->X) is considered to be real. In a similar manner, the interconnections of s = 12 (Z->Y) and s - t = 12 (X->Y) are obtained. But this graphical indication is not complete, and thus the calculation of two point MI and two point conditional MI should always be performed for confirmation.
[Figure 1 graphic: (1) diagram of the three-neuron network X, Y, Z; (2) three point MI surface over the (t, s)-plane, vertical scale 0.0010 to -0.0010, t in ms]
Figure 1. Three point MI analysis of simulated spike trains. (1) A three-neuron network model with Z->X, Z->Y and X->Y interconnections. The total number of spikes: X: 40, Y: 54, Z: 3150. (2) Three point MI analysis of spike trains. Three point MI has two positive peaks at (17, 12) and (17, 30), and one negative peak at (0, 12). For the peak at (17, 12) the neurons are renamed (Z as A, Y as B and X as C). Two point MI and two point conditional MI for the peak at (17, 12) are: I(A : B) = 0.03596, I(A : C) = 0.06855, I(B : C) = 0.01375, I(A : B | C) = 0.02126, I(A : C | B) = 0.05376, I(B : C | A) = 0.00011. So I(B : C | A) is approximately 0, indicating case III (see Table 1) with A->B (Z->Y) and A->C (Z->X) interconnections. Similarly, for the peaks at (17, 30) and (0, 12), Z->X and X->Y interconnections, and Z->Y and X->Y interconnections are obtained, respectively.
The network model in Figure 2(1) has four interconnections. Three point MI has five major peaks: four positive peaks at (17, -12), (17, 30), (-24, -12) and (17, 12), and one negative peak at (0, 10). The peaks at (17, -12), (17, 12) and (17, 30) are on the line t = 17 (Z->X), the peaks at (17, -12) and (-24, -12) are on the line s = -12 (Y->Z), the peaks at (17, 12) and (0, 10) are on the line s = 12 (Z->Y), and the peaks at (-24, -12), (0, 10) and (17, 30) are on the line s - t = 12 (X->Y).
[Figure 2 graphic: (1) diagram of the three-neuron network X, Y, Z; (2) three point MI surface over the (t, s)-plane, vertical scale 0.0008 to -0.0008, t in ms]
Figure 2. Three point MI analysis of simulated spike trains. (1) A three-neuron network model with Z->X, Z->Y, Y->Z and X->Y interconnections. The total number of spikes: X: 4300, Y: 5150, Z: 4850. (2) Three point MI analysis of spike trains. Three point MI has five major peaks: four positive peaks at (17, -12), (17, 12), (17, 30) and (-24, -12), and one negative peak at (0, 10).
The calculation of two point MI and two point conditional MI for each peak confirms that each three point MI peak was produced by two interconnections. Namely, the calculation indicates Z->X (t = 17), Y->Z (s = -12), Z->Y (s = 12) and X->Y (s - t = 12) interconnections. There are also some small peaks. They are considered to be ghost peaks due to two or three interconnections, at least one of which is a combination of two interconnections found by analyzing the major peaks. For example, the positive peak at (-7, -12) indicates Y->Z and X->Y interconnections, but the latter (s - t = -5) is the combination of the Z->X interconnection (t = 17) and the Z->Y interconnection (s = 12).
The interconnection structure of a network containing an inhibitory interconnection, or consisting of more than four neurons, can also be deduced, although it becomes more difficult to perform the three point MI analysis.
References
A. M. H. J. Aertsen, G. L. Gerstein, M. K. Habib & G. Palm. (1989) Dynamics of neuronal firing correlation: modulation of "effective connectivity". J. Neurophysiol. 61: 900-917.
R. Eckhorn, O. J. Grüsser, J. Kremer, K. Pellnitz & B. Pöpel. (1976) Efficiency of different neuronal codes: information transfer calculations for three different neuronal systems. Biol. Cybern. 22: 49-60.
K. Ikeda, K. Otsuka & K. Matsumoto. (1989) Maxwell-Bloch turbulence. Prog. Theor. Phys., Suppl. 99: 295-324.
W. J. McGill. (1955) Multivariate information transmission. IRE Trans. Inf. Theory 1: 93-111.
W. J. Melssen & W. J. M. Epping. (1987) Detection and estimation of neural connectivity based on crosscorrelation analysis. Biol. Cybern. 57: 403-414.
L. M. Optican & B. J. Richmond. (1987) Temporal encoding of two-dimensional patterns by single units in primate inferior temporal cortex. III. Information theoretic analysis. J. Neurophysiol. 57: 162-178.
C. E. Shannon. (1948) A mathematical theory of communication. Bell Syst. Tech. J. 27: 379-423.
S. Yamada, M. Nakashima, K. Matsumoto & S. Shiono. (1993a) Information theoretic analysis of action potential trains: I. Analysis of correlation between two neurons. Biol. Cybern., in press.
S. Yamada, M. Nakashima, K. Matsumoto & S. Shiono. (1993b) Information theoretic analysis of action potential trains: II. Analysis of correlation among three neurons. Submitted to Biol. Cybern.
W. M. Yamada, C. Koch & P. R. Adams. (1989) Multiple channels and calcium dynamics. In C. Koch & I. Segev (eds.), Methods in Neuronal Modeling: From Synapses to Networks, 97-133. Cambridge, MA, USA: MIT Press.
X. Yang & S. A. Shamma. (1990) Identification of connectivity in neural networks. Biophys. J. 57: 987-999.
Compatible Reward Inverse Reinforcement Learning
Alberto Maria Metelli
DEIB
Politecnico di Milano, Italy
Matteo Pirotta
SequeL Team
Inria Lille, France
Marcello Restelli
DEIB
Politecnico di Milano, Italy
[email protected]
[email protected]
[email protected]
Abstract
Inverse Reinforcement Learning (IRL) is an effective approach to recover a reward
function that explains the behavior of an expert by observing a set of demonstrations.
This paper is about a novel model-free IRL approach that, differently from most
of the existing IRL algorithms, does not require to specify a function space where
to search for the expert's reward function. Leveraging on the fact that the policy
gradient needs to be zero for any optimal policy, the algorithm generates a set of
basis functions that span the subspace of reward functions that make the policy
gradient vanish. Within this subspace, using a second-order criterion, we search
for the reward function that penalizes the most a deviation from the expert's policy.
After introducing our approach for finite domains, we extend it to continuous ones.
The proposed approach is empirically compared to other IRL methods both in the
(finite) Taxi domain and in the (continuous) Linear Quadratic Gaussian (LQG) and
Car on the Hill environments.
1
Introduction
Imitation learning aims to learn to perform a task by observing only expert?s demonstrations. We
consider the settings where only expert's demonstrations are given, and no information about the dynamics
and the objective of the problem is provided (e.g., reward) or ability to query for additional samples.
The main approaches solving this problem are behavioral cloning [1] and inverse reinforcement
learning [2]. The former recovers the demonstrated policy by learning the state-action mapping in a
supervised learning way, while inverse reinforcement learning aims to learn the reward function that
makes the expert optimal. Behavioral Cloning (BC) is simple, but its main limitation is the intrinsic
goal, i.e., to replicate the observed policy. This task has several limitations: it requires a huge amount
of data when the environment (or the expert) is stochastic [3]; it does not provide good generalization
or a description of the expert?s goal. On the contrary, Inverse Reinforcement Learning (IRL) accounts
for generalization and transferability by directly learning the reward function. This information can
be transferred to any new environment in which the features are well defined. As a consequence, IRL
allows recovering the optimal policy a posteriori, even under variations of the environment. IRL has
received a lot of attention in literature and has succeeded in several applications [e.g., 4, 5, 6, 7, 8].
However, BC and IRL are tightly related by the intrinsic relationship between reward and optimal
policy. The reward function defines the space of optimal policies and to recover the reward it is
required to observe/recover the optimal policy. The idea of this paper, and of some recent paper [e.g.,
9, 8, 3], is to exploit the synergy between BC and IRL.
Unfortunately, also IRL approaches present issues. First, several IRL methods require solving the
forward problem as part of an inner loop [e.g., 4, 5]. Literature has extensively focused on removing
this limitation [10, 11, 9] in order to scale IRL to real-world applications [12, 3, 13]. Second, IRL
methods generally require designing the function space by providing features that capture the structure
of the reward function [e.g., 4, 14, 5, 10, 15, 9]. This information, provided in addition to expert's
demonstrations, is critical for the success of the IRL approach. The issue of designing the function
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
space is a well-known problem in supervised learning, but it is even more critical in IRL since a wrong
choice might prevent the algorithm from finding good solutions to the IRL problem [2, 16], especially
when linear reward models are considered. The importance of incorporating feature construction
in IRL has been known in the literature for a while [4] but, as far as we know, it has been explicitly
addressed only in [17]. Recently, the IRL literature, by mimicking the supervised learning one, has focused
on exploiting the capability of neural networks to automatically construct relevant features out of the
provided data [12, 8, 13]. By exploiting a "black-box" approach, these methods do not take advantage
of the structure of the underlying Markov decision process (in the phase of feature construction).
We present an IRL algorithm that constructs reward features directly from expert's demonstrations.
The proposed algorithm is model-free and does not require solving the forward problem (i.e., finding
an optimal policy given a candidate reward function) as an inner step. The Compatible Reward Inverse
Reinforcement Learning (CR-IRL) algorithm builds a reward function that is compatible with the
expert's policy. It mixes BC and IRL in order to recover the "optimal" and most "informative" reward
function in the space spanned by the recovered features. Inspired by the gradient-minimization IRL
approach proposed in [9], we focus on the space of reward functions that makes the policy gradient
of the expert vanish. Since a zero gradient is only a necessary condition for optimality, we consider a
second order optimality criterion based on the policy Hessian to rank the reward functions and finally
select the best one (i.e., the one that penalizes the most a deviation from the expert's policy).
2
Algorithm Overview
A Markov Decision Process (MDP) [18] is defined as $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, R, \gamma, \mu)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}(s'|s,a)$ is a Markovian transition model that defines the conditional distribution of the next state $s'$ given the current state $s$ and the current action $a$, $\gamma \in [0,1]$ is the discount factor, $R(s,a)$ is the expected reward for performing action $a$ in state $s$, and $\mu$ is the distribution of the initial state. The optimal policy $\pi^*$ is the policy that maximizes the discounted sum of rewards $\mathbb{E}\left[\sum_{t=0}^{+\infty} \gamma^t R(s_t, a_t) \mid \pi, \mathcal{M}\right]$.
CR-IRL takes as input a parametric policy space $\Pi_\Theta = \{\pi_\theta : \theta \in \Theta \subseteq \mathbb{R}^k\}$ and a set of rewardless trajectories from the expert policy $\pi^E$, denoted by $\mathcal{D} = \{(s_{\tau_i,0}, a_{\tau_i,0}, \dots, s_{\tau_i,T(\tau_i)}, a_{\tau_i,T(\tau_i)})\}_{i=1}^N$, where $s_{\tau_i,t}$ is the $t$-th state in trajectory $\tau_i$ and $i = 1, \dots, N$. CR-IRL is a non-iterative algorithm
that recovers a reward function for which the expert is optimal, without requiring to specify a reward function space. It starts by building the features $\{\varphi_i\}$ of the value function that are compatible with policy $\pi^E$, i.e., that make the policy gradient vanish (Phase 1, see Sec. 3). This step requires a parametric representation $\pi_{\theta^E} \in \Pi_\Theta$ of the expert's policy, which can be obtained through behavioral cloning.¹ The choice of the policy space $\Pi_\Theta$ influences the size of the functional space used by CR-IRL for representing the value function (and the reward function) associated with the expert's policy. In order to formalize this notion, we introduce the policy rank, a quantity that represents the ability of a parametric policy to reduce the dimensions of the approximation space for the value function of the expert's policy. Once these value features have been built, they can be transformed into reward features $\{\psi_i\}$ (Phase 2, see Sec. 4) by means of the Bellman equation [18] (model-based) or reward shaping [19] (model-free). All the rewards spanned by the features $\{\psi_i\}$ satisfy the first-order necessary optimality condition [20], but we are not sure about their nature (minima, maxima or saddle points). The final step is thus to recover a reward function that is maximized by the expert's policy (Phase 3, see Sec. 5). This is achieved by considering a second-order optimality condition, with the idea that we want the reward function that penalizes the most a deviation from the parameters of the expert's policy $\pi_{\theta^E}$. This criterion is similar in spirit to what is done in [2, 4, 14], where the goal is to identify the reward function that makes the expert's policy better than any other policy by a margin. The algorithmic structure is reported in Alg. 1.
IRL literature usually considers two different settings: optimal or sub-optimal expert. This distinction
is necessary when a fixed reward space is provided. In fact, the demonstrated behavior may not be
optimal under the considered reward space. In this case, the problem becomes somehow not well
defined and additional "optimality" criteria are required [16]. This is not the case for CR-IRL that is
able to automatically generate the space of reward functions that make the policy gradient vanish,
¹ We want to stress that our primal objective is to recover the reward function, since we aim to explain the motivations that guide the expert and to transfer it, not just to replicate the behavior. As explained in the introduction, we aim to exploit the synergy between BC and IRL.
thus containing also reward functions under which the recovered expert's policy $\pi_{\theta^E}$ is optimal. In the rest of the paper, we will assume to have a parametric representation of the expert's policy, which we will denote for simplicity by $\pi_\theta$.
3
Expert's Compatible Value Features
In this section, we present the procedure to obtain the set $\{\varphi_i\}_{i=1}^p$ of Expert's COmpatible Q-features (ECO-Q) that make the policy gradient vanish² (Phase 1). We start by introducing the policy gradient and the associated first-order optimality condition. We will indicate with $\mathcal{T}$ the set of all possible trajectories, $p_\theta(\tau)$ the probability density of trajectory $\tau$, and $R(\tau)$ the $\gamma$-discounted trajectory reward, defined as $R(\tau) = \sum_{t=0}^{T(\tau)} \gamma^t R(s_{\tau,t}, a_{\tau,t})$, that, in our settings, is obtained as a linear combination of reward features. Given a policy $\pi_\theta$, the expected $\gamma$-discounted return for an infinite-horizon MDP is:

$$J(\theta) = \int_{\mathcal{S}} d_\mu^{\pi_\theta}(s) \int_{\mathcal{A}} \pi_\theta(a|s) R(s,a)\,\mathrm{d}a\,\mathrm{d}s = \int_{\mathcal{T}} p_\theta(\tau) R(\tau)\,\mathrm{d}\tau,$$

where $d_\mu^{\pi_\theta}$ is the $\gamma$-discounted future state occupancy [21]. If $\pi_\theta$ is differentiable w.r.t. the parameter $\theta$, the gradient of the expected reward (policy gradient) [21, 22] is:

$$\nabla_\theta J(\theta) = \int_{\mathcal{S}} \int_{\mathcal{A}} d_\mu^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a|s)\, Q^{\pi_\theta}(s,a)\,\mathrm{d}a\,\mathrm{d}s = \int_{\mathcal{T}} p_\theta(\tau) \nabla_\theta \log p_\theta(\tau) R(\tau)\,\mathrm{d}\tau, \quad (1)$$

where $d_\mu^{\pi_\theta}(s,a) = d_\mu^{\pi_\theta}(s)\pi_\theta(a|s)$ is the $\gamma$-discounted future state-action occupancy, which represents the expected discounted number of times action $a$ is executed in state $s$ given $\mu$ as initial state distribution and following policy $\pi_\theta$. When $\pi_\theta$ is an optimal policy in the class of policies $\Pi_\Theta = \{\pi_\theta : \theta \in \Theta \subseteq \mathbb{R}^k\}$, then $\theta$ is a stationary point of the expected return and thus $\nabla_\theta J(\theta) = 0$ (first-order necessary conditions for optimality [20]).
We assume the space $\mathcal{S} \times \mathcal{A}$ to be a Hilbert space [23] equipped with the weighted inner product:³

$$\langle f, g \rangle_{\mu,\pi_\theta} = \int_{\mathcal{S}} \int_{\mathcal{A}} f(s,a)\, d_\mu^{\pi_\theta}(s,a)\, g(s,a)\,\mathrm{d}s\,\mathrm{d}a. \quad (2)$$

When $\pi_\theta$ is optimal for the MDP, $\nabla_\theta \log \pi_\theta$ and $Q^{\pi_\theta}$ are orthogonal w.r.t. the inner product (2). We can exploit the orthogonality property to build an approximation space for the Q-function. Let $\mathcal{G}_{\pi_\theta} = \{\nabla_\theta \log \pi_\theta\, \alpha : \alpha \in \mathbb{R}^k\}$ be the subspace spanned by the gradient of the log-policy $\pi_\theta$. From equation (1), finding an approximation space for the Q-function is equivalent to finding the orthogonal complement of the subspace $\mathcal{G}_{\pi_\theta}$, which in turn corresponds to finding the null space of the functional:

$$G_{\pi_\theta}[\varphi] = \langle \nabla_\theta \log \pi_\theta, \varphi \rangle_{\mu,\pi_\theta}. \quad (3)$$

We define an Expert's COmpatible Q-feature as any function $\varphi$ making the functional (3) null. This space $\mathcal{G}^\perp_{\pi_\theta} := \mathrm{null}(G_{\pi_\theta})$ represents the Hilbert subspace of the features for the Q-function that are compatible with the policy $\pi_\theta$, in the sense that any Q-function optimized by policy $\pi_\theta$ can be expressed as a linear combination of those features. Sections 3.2 and 3.3 describe how to compute the ECO-Q from samples in finite and continuous MDPs, respectively. The dimension of $\mathcal{G}^\perp_{\pi_\theta}$ is typically very large since the number $k$ of policy parameters is significantly smaller than the number of state-action pairs. A formal discussion of this issue for finite MDPs is presented in the next section.
3.1
Policy rank
The parametrization of the expert's policy influences the size of $\mathcal{G}^\perp_{\pi_\theta}$. Intuition suggests that the larger the number $k$ of parameters, the more informative the policy is for inferring the Q-function, and so the reward function. This is motivated by the following rationale. Consider representing the expert's policy using two different policy models such that one model is a superclass of the other one (for instance, assume to use linear models where the features used in the simpler model are a subset of the features used by policies in the other model). All the reward functions that make the policy gradient
² Notice that any linear combination of the ECO-Q also satisfies the first-order optimality condition.
³ The inner product as defined is clearly symmetric, positive definite and linear, but there could be state-action pairs never visited, i.e., $d_\mu^{\pi_\theta}(s,a) = 0$, making $\langle f, f \rangle_{\mu,\pi_\theta} = 0$ for non-zero $f$. To ensure the properties of the inner product, we assume to compute it only on visited state-action pairs.
vanish with the rich policy model also do so with the simpler model, while the converse does not hold. This suggests that complex policy models are able to reduce the space of candidate reward functions more than simpler models can. This notion plays an important role for finite MDPs, i.e., MDPs where the state-action space is finite. We formalize the ability of a policy to infer the characteristics of the MDP with the concept of policy rank.
MDP with the concept of policy rank.
Definition 1. Let ?? a policy with k parameters belonging to the class ?? and differentiable in ?.
The policy rank is the dimension of the space of the linear combinations of the partial derivatives of
?? w.r.t. ?:
rank(?? ) = dim(??? ), ??? = {?? ?? ? : ? ? Rk }.
A first important note is that the policy rank depends not only on the policy model $\pi_\theta$ but also on the value of the parameters of the policy $\pi_\theta$. So the policy rank is a property of the policy, not of the policy model. The following bound on the policy rank holds (the proof can be found in App. A.1).
Proposition 1. Given a finite MDP $\mathcal{M}$, let $\pi_\theta$ be a policy with $k$ parameters belonging to the class $\Pi_\Theta$ and differentiable in $\theta$; then: $\mathrm{rank}(\pi_\theta) \le \min\{k, |\mathcal{S}||\mathcal{A}| - |\mathcal{S}|\}$.
From an intuitive point of view, this is justified by the fact that $\pi_\theta(\cdot|s)$ is a probability distribution. As a consequence, for all $s \in \mathcal{S}$ the probabilities $\pi_\theta(a|s)$ must sum up to 1, removing $|\mathcal{S}|$ degrees of freedom. This has a relevant impact on the algorithm since it induces a lower bound on the dimension of the orthogonal complement, $\dim(\mathcal{G}^\perp_{\pi_\theta}) \ge \max\{|\mathcal{S}||\mathcal{A}| - k, |\mathcal{S}|\}$; thus even the most flexible policy (i.e., a policy model with a parameter for each state-action pair) cannot determine a unique reward function that makes the expert's policy optimal, leaving $|\mathcal{S}|$ degrees of freedom. It follows that it makes no sense to consider a policy with more than $|\mathcal{S}||\mathcal{A}| - |\mathcal{S}|$ parameters. The generalization capabilities enjoyed by the recovered reward function are deeply related to the choice of the policy model. Complex policies (many parameters) would require finding a reward function that explains the value of all the parameters, resulting in possible overfitting, whereas a simple policy model (few parameters) would enforce generalization as the imposed constraints are fewer.
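As a concrete illustration of Definition 1, the policy rank can be computed numerically as the rank of the Jacobian of the policy; a minimal sketch for a finite MDP, with illustrative names and tolerance:

```python
import numpy as np

def policy_rank(jac_pi, tol=1e-10):
    """Numerical policy rank of a finite-MDP policy.

    jac_pi : (|S||A|, k) matrix whose rows are the partial derivatives
             grad_theta pi_theta(a|s), one row per state-action pair.
    The rank of this Jacobian is the dimension of the span of the partial
    derivatives, i.e., rank(pi_theta) in Definition 1, and it is bounded
    by min(k, |S||A| - |S|) as stated in Proposition 1.
    """
    s = np.linalg.svd(jac_pi, compute_uv=False)  # singular values
    return int(np.sum(s > tol))                  # count those above tolerance
```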
3.2
Construction of ECO-Q in Finite MDPs
We now develop in detail the algorithm to generate ECO-Q in the case of finite MDPs. From now on we will indicate with $|D|$ the number of distinct state-action pairs visited by the expert along the available trajectories. When the state-action space is finite, the inner product (2) can be written in matrix notation as:
$$\langle f, g \rangle_{\mu,\pi_\theta} = f^T D_\mu^{\pi_\theta} g,$$
where $f$, $g$ and $d_\mu^{\pi_\theta}$ are real vectors with $|D|$ components and $D_\mu^{\pi_\theta} = \mathrm{diag}(d_\mu^{\pi_\theta})$. The term $\nabla_\theta \log \pi_\theta$ is a $|D| \times k$ real matrix; thus finding the null space of the functional (3) is equivalent to finding the null space of the matrix $\nabla_\theta \log \pi_\theta^T D_\mu^{\pi_\theta}$. This can be done, for instance, through SVD, which allows to obtain a set of orthogonal basis functions $\Phi$. Given that the weight vector $d_\mu^{\pi_\theta}(s,a)$ is usually unknown, it needs to be estimated. Since the policy $\pi_\theta$ is known, we need to estimate just $d_\mu^{\pi_\theta}(s)$, as $d_\mu^{\pi_\theta}(s,a) = d_\mu^{\pi_\theta}(s)\pi_\theta(a|s)$. A Monte Carlo estimate exploiting the expert's demonstrations in $\mathcal{D}$ is:
$$\hat{d}_\mu^{\pi_\theta}(s) = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T(\tau_i)} \gamma^t \mathbb{1}(s_{\tau_i,t} = s). \quad (4)$$
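A minimal sketch of this construction, assuming the visited state-action pairs have been enumerated and the occupancy estimate of Eq. (4) is available (function and variable names are illustrative):

```python
import numpy as np

def eco_q_features(grad_log_pi, d_occ, tol=1e-10):
    """ECO-Q basis for a finite MDP.

    grad_log_pi : (|D|, k) matrix; row j is grad_theta log pi_theta at the
                  j-th visited state-action pair.
    d_occ       : (|D|,) vector with the estimated discounted occupancy
                  d(s, a) = d(s) * pi_theta(a|s) of those pairs.
    Returns an orthonormal basis of the null space of
    grad_log_pi^T diag(d_occ), i.e., Q-features with zero policy gradient.
    """
    M = grad_log_pi.T * d_occ                 # k x |D|, columns scaled by d_occ
    _, s, Vt = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > tol))               # numerical rank of M
    return Vt[rank:].T                        # |D| x (|D| - rank) basis Phi
```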
3.3
Construction of ECO-Q in Continuous MDPs
To extend the previous approach to the continuous domain, we assume that the state-action space is equipped with the Euclidean distance. Now we can adopt an approach similar to the one exploited to extend Proto-Value Functions (PVF) [24, 25] to infinite observation spaces [26]. The problem is treated as a discrete one, considering only the state-action pairs visited along the collected trajectories. A Nyström interpolation method is used to approximate the value of a feature in a non-visited state-action pair as a weighted mean of the values of the closest $k$ features. The weight of each feature is computed by means of a Gaussian kernel placed over the Euclidean space $\mathcal{S} \times \mathcal{A}$:
$$K\big((s,a),(s',a')\big) = \exp\left(-\frac{1}{2\sigma_S^2}\|s - s'\|_2^2 - \frac{1}{2\sigma_A^2}\|a - a'\|_2^2\right), \quad (5)$$
where $\sigma_S$ and $\sigma_A$ are respectively the state and action bandwidths. In our setting, this approach is fully equivalent to a kernel k-Nearest Neighbors regression.
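A sketch of this interpolation for vector-valued states and actions; the bandwidths and the number of neighbors are illustrative choices:

```python
import numpy as np

def interpolate_feature(s, a, S_vis, A_vis, phi_vis, k=5,
                        sigma_s=0.5, sigma_a=0.5):
    """Nystrom-style extension of a feature phi to an unvisited pair (s, a).

    S_vis, A_vis : visited states (n, d_s) and actions (n, d_a).
    phi_vis      : (n,) feature values at the visited pairs.
    The value at (s, a) is the Gaussian-kernel (Eq. 5) weighted mean of the
    k nearest visited pairs, i.e., a kernel k-NN regression.
    """
    d2 = (np.sum((S_vis - s) ** 2, axis=1) / sigma_s ** 2
          + np.sum((A_vis - a) ** 2, axis=1) / sigma_a ** 2)
    nn = np.argsort(d2)[:k]            # indices of the k nearest pairs
    w = np.exp(-0.5 * d2[nn])          # Gaussian kernel weights
    return float(np.dot(w, phi_vis[nn]) / np.sum(w))
```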
4
Expert's Compatible Reward Features
The set of ECO-Q basis functions allows representing the optimal value function under the policy $\pi_\theta$. In this section, we will show how it is possible to exploit ECO-Q functions to generate basis functions for the reward representation (Phase 2). In principle, we can use the Bellman equation to obtain the reward from the Q-function, but this approach requires the knowledge of the transition model (see App. B). The reward can be recovered in a model-free way by exploiting optimality-invariant reward transformations.
Reversing the Bellman equation [e.g., 10] allows finding the reward space that generates the estimated Q-function. However, IRL is interested in finding just a reward space under which the expert's policy is optimal. This problem can be seen as an instance of reward shaping [19], where the authors show that the space of all the reward functions sharing the same optimal policy is given by:
$$R'(s,a) = R(s,a) + \gamma \int_{\mathcal{S}} \mathcal{P}(s'|s,a)\,\phi(s')\,\mathrm{d}s' - \phi(s),$$
where $\phi(s)$ is a state-dependent potential function. A smart choice [19] is to set $\phi = V^{\pi_\theta}$, under which the new reward space is given by the advantage function: $R'(s,a) = Q^{\pi_\theta}(s,a) - V^{\pi_\theta}(s) = A^{\pi_\theta}(s,a)$. Thus the expert's advantage function is an admissible reward optimized by the expert's policy itself. This choice is, of course, related to using $Q^{\pi_\theta}$ as reward. However, the advantage function encodes more local and more transferable information w.r.t. the Q-function.
The space of reward features can be recovered through the matrix equality $\Psi = (I - \tilde{\Pi}_\theta)\Phi$, where $\tilde{\Pi}_\theta$ is a $|D| \times |D|$ matrix obtained from $\pi_\theta$ by repeating the row of each visited state a number of times equal to the number of distinct actions performed by the expert in that state. Notice that this is a simple linear transformation through the expert's policy. The specific choice of the state-potential function has the advantage of improving the learning capabilities of any RL algorithm [19]. This is not the only possible choice of the potential function, but it has the advantage of allowing model-free estimation.
Once the ECO-R basis functions have been generated, they can be used to feed any IRL algorithm that represents the expert's reward through a linear combination of basis functions. In the next section, we propose a new method based on the optimization of a second-order criterion that favors reward functions that significantly penalize deviations from the expert's policy.
5
Reward Selection via Second-Order Criterion
Any linear combination of the ECO-R $\{\psi_i\}_{i=1}^p$ makes the gradient vanish; however, in general this is not sufficient to ensure that the policy parameter $\theta$ is a maximum of $J(\theta)$. Combinations that lead to minima or saddle points should be discarded. Furthermore, provided that a subset of ECO-R leading to maxima has been selected, we should identify a single reward function in the space spanned by this subset of features (Phase 3). Both these requirements can be enforced by imposing a second-order optimality criterion based on the policy Hessian, which is given by [27, 28]:

$$H_\theta J(\theta, \omega) = \int_{\mathcal{T}} p_\theta(\tau) \left[ \nabla_\theta \log p_\theta(\tau) \nabla_\theta \log p_\theta(\tau)^T + H_\theta \log p_\theta(\tau) \right] R(\tau, \omega)\,\mathrm{d}\tau,$$

where $\omega$ is the reward weight and $R(\tau, \omega) = \sum_{i=1}^{p} \omega_i \sum_{t=0}^{T(\tau)} \gamma^t \psi_i(s_{\tau,t}, a_{\tau,t})$.
In order to retain only maxima, we need to impose that the Hessian is negative definite. Furthermore, we aim to find the reward function that best represents the optimal policy parametrization, in the sense that even a slight change of the parameters of the expert's policy induces a significant degradation of the performance. Geometrically, this corresponds to finding the reward function for which the expected return locally represents the sharpest hyper-paraboloid. These requirements can be enforced using a Semi-Definite Programming (SDP) approach where the objective is to minimize the maximum eigenvalue of the Hessian, whose eigenvector corresponds to the direction of minimum curvature (maximum eigenvalue optimality criterion). This problem is not appealing in practice due to its high computational burden. Furthermore, it might be the case that the strict negative definiteness constraint is never satisfied due to blocked-to-zero eigenvalues (for instance, in presence of policy parameters that do not affect the policy performance). In these cases, we can consider maximizing an index of the overall concavity. The trace of the Hessian, being the sum of the eigenvalues, can be used for this purpose. This problem can still be defined as an SDP problem (trace optimality criterion). See App. C for details.
The trace optimality criterion, although less demanding w.r.t. the eigenvalue-based one, still displays performance degradation as the number of basis functions increases due to the negative definiteness constraint. Solving the semidefinite programming problem of one of the previous optimality criteria is unfeasible for almost all the real world problems. We are interested in formulating a non-SDP problem, which is a surrogate of the trace optimality criterion, that can be solved more efficiently (trace heuristic criterion). In our framework, the reward function can be expressed as a linear combination of the ECO-R, so we can rewrite the Hessian as $H_\theta J(\theta, \omega) = \sum_{i=1}^{p} \omega_i H_\theta J_i(\theta)$, where $J_i(\theta)$ is the expected return considering $\psi_i$ as reward function. We assume that the ECO-R are orthonormal in order to compare them.⁴ The main challenge is how to select the weight $\omega$ in order to get a (sub-)optimal trace minimizer that preserves the negative semidefinite constraint. From Weyl's inequality, we get a feasible solution by retaining only the ECO-Rs yielding a semidefinite Hessian and switching sign to those with positive semidefinite Hessian. Our heuristic consists in looking for the weights $\omega$ that minimize the trace in this reduced space (in which all ECO-R have a negative semidefinite Hessian). Notice that in this way we can lose the optimal solution, since the trace minimizer might assign a non-zero weight to an ECO-R with indefinite Hessian. For brevity, we will indicate with $tr_i = \mathrm{tr}(H_\theta J_i(\theta))$ and $\mathbf{tr}$ the vector whose components are $tr_i$. SDP is no longer needed:

$$\min_\omega\; \omega^T \mathbf{tr} \quad \text{s.t.} \quad \|\omega\|_2^2 = 1. \quad (6)$$

The constraint $\|\omega\|_2^2 = 1$ ensures that, when the ECO-R are orthonormal, the resulting ECO-R has Euclidean norm one. This is a convex programming problem with linear objective function and quadratic constraint; the closed form solution can be found with Lagrange multipliers: $\omega_i = -tr_i / \|\mathbf{tr}\|_2$ (see App. A.2 for the derivation). Refer to Algorithm 1 for a complete overview of CR-IRL (the computational analysis of CR-IRL is reported in App. E).

Input: $\mathcal{D} = \{(s_{\tau_i,0}, a_{\tau_i,0}, \dots, s_{\tau_i,T(\tau_i)}, a_{\tau_i,T(\tau_i)})\}_{i=1}^N$, a set of expert's trajectories, and a parametric expert's policy $\pi_\theta$.
Output: trace heuristic ECO-R, $R^{tr\text{-}heu}$.
Phase 1:
1. Estimate $\hat{d}_\mu^{\pi_\theta}(s)$ for the visited state-action pairs using Eq. (4) and compute $\hat{d}_\mu^{\pi_\theta}(s,a) = \hat{d}_\mu^{\pi_\theta}(s)\pi_\theta(a|s)$.
2. Collect $\hat{d}_\mu^{\pi_\theta}(s,a)$ in the $|D| \times |D|$ diagonal matrix $\hat{D}_\mu^{\pi_\theta}$ and $\nabla_\theta \log \pi_\theta(s,a)$ in the $|D| \times k$ matrix $\nabla_\theta \log \pi_\theta$.
3. Get the set of ECO-Q by computing the null space of the matrix $\nabla_\theta \log \pi_\theta^T \hat{D}_\mu^{\pi_\theta}$ through SVD: $\Phi = \mathrm{null}\big(\nabla_\theta \log \pi_\theta^T \hat{D}_\mu^{\pi_\theta}\big)$.
Phase 2:
4. Get the set of ECO-R by applying reward shaping to the set of ECO-Q: $\Psi = (I - \tilde{\Pi}_\theta)\Phi$.
5. Apply SVD to orthogonalize $\Psi$.
Phase 3:
6. Estimate the policy Hessian for each ECO-R $\psi_i$, $i = 1, \dots, p$, using the equation:ᵃ
$$\hat{H}_\theta J_i(\theta) = \frac{1}{N} \sum_{j=1}^{N} \left[ \nabla_\theta \log p_\theta(\tau_j) \nabla_\theta \log p_\theta(\tau_j)^T + H_\theta \log p_\theta(\tau_j) \right] \big(\psi_i(\tau_j) - b\big).$$
7. Discard the ECO-R having indefinite Hessian, switch sign for those having positive semidefinite Hessian, compute the traces of each Hessian and collect them in the vector $\mathbf{tr}$.
8. Compute the trace heuristic ECO-R as: $R^{tr\text{-}heu} = \Psi\omega$, $\omega = -\mathbf{tr}/\|\mathbf{tr}\|_2$.
9. (Optional) Apply penalization to unexplored state-action pairs.
ᵃ The optimal baseline $b$ is provided in [29, 30].
Alg 1: CR-IRL algorithm.
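A sketch of steps 6-8 (Phase 3), assuming the per-feature Hessian estimates are available; the tolerance is an illustrative choice:

```python
import numpy as np

def trace_heuristic_reward(hessians, Psi, tol=1e-8):
    """Select the reward in span(Psi) with the trace heuristic.

    hessians : list of (k, k) estimated policy Hessians H_theta J_i(theta),
               one per ECO-R column of Psi.
    Returns the recovered reward over the visited pairs, or None if no
    ECO-R has a (semi)definite Hessian.
    """
    keep, signs = [], []
    for i, H in enumerate(hessians):
        eig = np.linalg.eigvalsh(0.5 * (H + H.T))   # symmetrize, then eigenvalues
        if np.all(eig <= tol):                      # negative semidefinite: keep
            keep.append(i); signs.append(1.0)
        elif np.all(eig >= -tol):                   # positive semidefinite: flip
            keep.append(i); signs.append(-1.0)
    if not keep:                                    # all Hessians indefinite
        return None
    signs = np.array(signs)
    tr = signs * np.array([np.trace(hessians[i]) for i in keep])
    omega = -tr / np.linalg.norm(tr)                # closed form of problem (6)
    return (Psi[:, keep] * signs) @ omega
```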
CR-IRL does not assume to know the state space $\mathcal{S}$ and the action space $\mathcal{A}$; thus the recovered reward is defined only on the state-action pairs visited by the expert along the trajectories in $\mathcal{D}$. When the state and action spaces are known, we can complete the reward function also for unexplored state-action pairs by assigning a penalized reward (e.g., a large negative value); otherwise, the penalization can be performed online when the recovered reward is used to solve the forward RL problem.
6
Related Work
There has been a surge of recent interest in improving IRL in order to make it more appealing for
real-world applications. We highlight the lines of work that are most related to this paper.
We start by investigating how the IRL literature has faced the problem of designing a suitable reward space.
Almost all the IRL approaches share the necessity to define a priori a set of handcrafted features,
⁴ A normalization condition is necessary since the magnitude of the trace of a matrix can be arbitrarily changed by multiplying the matrix by a constant.
spanning the approximation space of the reward functions. While a good set of basis functions can
greatly simplify the IRL problem, a bad choice may significantly harm the performance of any IRL
algorithm. The Feature construction for Inverse Reinforcement Learning (FIRL) algorithm [17], as far
as we know, is the only approach that explicitly incorporates the feature construction as an inner step.
FIRL alternates between optimization and fitting phases. The optimization phase aims to recover a
reward function?from the current feature set as a linear projection?such that the associated optimal
policy is consistent with the demonstrations. In the fitting phase new features are created (using a
regression tree) in order to better explain regions where the old features were too coarse. The method
proved to be effective, also achieving (feature) transfer capabilities. However, FIRL requires the
MDP model to solve the forward problem and the complete optimal policy for the fitting step in order
to evaluate the consistency with demonstrations.
Recent works have indirectly coped with the feature construction problem by exploiting neural
networks [12, 3, 13]. Although effective, the black-box approach does not take into account the MDP
structure of the problem. RL has extensively investigated the feature construction for the forward
problem both for value function [24, 25, 31, 32] and policy [21] features. In this paper, we have
followed this line of work, mixing concepts deriving from the policy and value fields. We have leveraged the policy gradient theorem and the associated concept of compatible functions to derive ECO-Q features. First-order necessary conditions have already been used in the literature to derive IRL
algorithms [9, 33]. However, in both cases the authors assume a fixed reward space under which it
may not be possible to find a reward for which the expert is optimal. Although there are similarities,
this paper exploits first-order optimality to recover the reward basis, while the "best" reward function
is selected according to a second-order criterion. This allows recovering a more robust solution
overcoming uncertainty issues raised by the use of the first-order information only.
7
Experimental results
We evaluate CR-IRL against some popular IRL algorithms both in discrete and in continuous domains:
the Taxi problem (discrete), the Linear Quadratic Gaussian and the Car on the Hill environments
(continuous). We provide here the most significant results; the full data are reported in App. D.
7.1
Taxi
The Taxi domain is defined in [34]. We assume the expert plays an $\epsilon$-Boltzmann policy with fixed $\epsilon$:
$$\pi_{\theta,\epsilon}(a|s) = (1-\epsilon)\, \frac{e^{\theta_a^T \phi_s}}{\sum_{a' \in \mathcal{A}} e^{\theta_{a'}^T \phi_s}} + \frac{\epsilon}{|\mathcal{A}|},$$
where the policy features $\phi_s$ are the following state features: current location, passenger location, destination location, whether the passenger has already been picked up.
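A sketch of this expert policy, assuming one weight vector per action (shapes and names are illustrative):

```python
import numpy as np

def eps_boltzmann(theta, phi_s, eps=0.1):
    """epsilon-Boltzmann action distribution over |A| actions.

    theta : (|A|, d) matrix with one weight vector theta_a per action.
    phi_s : (d,) state feature vector.
    """
    logits = theta @ phi_s
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return (1.0 - eps) * p + eps / len(p)     # mix with uniform exploration
```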
This test is meant to compare the learning speed of the reward functions recovered by the considered IRL methods when a Boltzmann policy ($\epsilon = 0$) is trained with REINFORCE [22]. To evaluate the robustness to imperfect experts, we introduce a noise ($\epsilon$) in the optimal policy. Figure 2 shows that CR-IRL, with 100 expert's trajectories, outperforms the true reward function in terms of convergence speed regardless of the exploration level. Behavioral Cloning (BC), obtained by recovering the maximum likelihood $\epsilon$-Boltzmann policy ($\epsilon = 0, 0.1$) from expert's trajectories, is very susceptible to noise.
We also compare the second-order criterion of CR-IRL to single out the reward function with Maximum Entropy IRL (ME-IRL) [6] and Linear Programming Apprenticeship Learning (LPAL) [5], using as reward features the set of ECO-R (comparisons with different sets of features are reported in App. D.2). We can see in Figure 2 that ME-IRL does not perform well when $\epsilon = 0$, since the transition model is badly estimated. The convergence speed remains very slow also for $\epsilon = 0.1$, since ME-IRL does not guarantee that the recovered reward is a maximum of $J$. LPAL provides as output an apprenticeship policy (not a reward function) and, like BC, is very sensitive to noise and to the quality of the estimated transition model.
7.2
Linear Quadratic Gaussian Regulator
We consider the one-dimensional Linear Quadratic Gaussian regulator [35] with an expert playing a Gaussian policy $\pi_K(\cdot|s) \sim \mathcal{N}(Ks, \sigma^2)$, where $K$ is the parameter and $\sigma^2$ is fixed.
[Figure 2: Average return of the Taxi problem as a function of the number of iterations of REINFORCE; two panels ($\epsilon = 0$ and $\epsilon = 0.1$); legend: Reward, CR-IRL, ME-IRL, LPAL, BC ($\epsilon = 0.1$), BC ($\epsilon = 0$), Expert.]
[Figure 3: Parameter value of LQG as a function of the number of iterations of REINFORCE; legend: Reward, GIRL-square, GIRL-abs-val, Advantage, CR-IRL, Expert.]
[Figure 4: Average return of Car on the Hill as a function of the number of FQI iterations; legend: CR-IRL, Expert, BC, Reward.]
We compare CR-IRL with GIRL [9] using two linear parametrizations of the reward function: $R(s,a,\omega) = \omega_1 s^2 + \omega_2 a^2$ (GIRL-square) and $R(s,a,\omega) = \omega_1 |s| + \omega_2 |a|$ (GIRL-abs-val). Figure 3 shows the parameter ($K$) value learned with REINFORCE using a Gaussian policy with variance $\sigma^2 = 0.01$. We notice that CR-IRL, fed with 20 expert's trajectories, converges closer and faster to the expert's parameter than the true reward, the advantage function, and GIRL with both parametrizations.
7.3
Car on the Hill
We further test CR-IRL in the continuous Car on the Hill domain [36]. We build the optimal policy via FQI [36] and we consider a noisy expert's policy in which a random action is selected with probability $\epsilon = 0.1$. We exploit 20 expert's trajectories to estimate the parameters $w$ of a Gaussian policy $\pi_w(a|s) \sim \mathcal{N}(y_w(s), \sigma^2)$, where the mean $y_w(s)$ is a radial basis function network (details and a comparison with $\epsilon = 0.2$ in Appendix D.4). The reward function recovered by CR-IRL does not necessarily need to be used only with policy gradient approaches. Here we compare the average return as a function of the number of iterations of FQI, fed with the different recovered rewards. Figure 4 shows that FQI converges faster to optimal policies when fed with the reward recovered by CR-IRL rather than with the original reward. Moreover, it overcomes the performance of the policy recovered via BC.
8
Conclusions
We presented an algorithm, CR-IRL, that leverages the policy gradient to recover, from a set of expert's demonstrations, a reward function that explains the expert's behavior and penalizes deviations. Differently from a large part of the IRL literature, CR-IRL does not require specifying a priori an approximation space for the reward function. The empirical results show (quite unexpectedly) that the reward function recovered by our algorithm allows learning policies that outperform both behavioral cloning and those obtained with the true reward function (learning speed). Furthermore, the Hessian trace heuristic criterion, when applied to ECO-R, outperforms classic IRL methods.
Acknowledgments
This research was supported in part by the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council, and the French National Research Agency (ANR) under project ExTra-Learn
(n.ANR-14-CE24-0010-01).
References
[1] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.
[2] Andrew Y. Ng, Stuart J. Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pages 663-670, 2000.
[3] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, pages 4565-4573, 2016.
[4] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, page 1. ACM, 2004.
[5] Umar Syed, Michael H. Bowling, and Robert E. Schapire. Apprenticeship learning using linear programming. In ICML, volume 307 of ACM International Conference Proceeding Series, pages 1032-1039. ACM, 2008.
[6] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433-1438. Chicago, IL, USA, 2008.
[7] Nathan D. Ratliff, David Silver, and J. Andrew Bagnell. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27(1):25-53, 2009.
[8] Jonathan Ho, Jayesh K. Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. In ICML, volume 48 of JMLR Workshop and Conference Proceedings, pages 2760-2769. JMLR.org, 2016.
[9] Matteo Pirotta and Marcello Restelli. Inverse reinforcement learning through policy gradient minimization. In AAAI, pages 1993-1999, 2016.
[10] Edouard Klein, Bilal Piot, Matthieu Geist, and Olivier Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. In ECML/PKDD (1), volume 8188 of Lecture Notes in Computer Science, pages 1-16. Springer, 2013.
[11] Bilal Piot, Matthieu Geist, and Olivier Pietquin. Boosted and reward-regularized classification for apprenticeship learning. In AAMAS, pages 1249-1256. IFAAMAS/ACM, 2014.
[12] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In ICML, volume 48 of JMLR Workshop and Conference Proceedings, pages 49-58. JMLR.org, 2016.
[13] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, and Audrunas Gruslys. Learning from demonstrations for real world reinforcement learning. CoRR, abs/1704.03732, 2017.
[14] Nathan D. Ratliff, J. Andrew Bagnell, and Martin Zinkevich. Maximum margin planning. In ICML, volume 148 of ACM International Conference Proceeding Series, pages 729-736. ACM, 2006.
[15] Julien Audiffren, Michal Valko, Alessandro Lazaric, and Mohammad Ghavamzadeh. Maximum entropy semi-supervised inverse reinforcement learning. In IJCAI, pages 3315-3321. AAAI Press, 2015.
[16] Gergely Neu and Csaba Szepesvári. Training parsers by inverse reinforcement learning. Machine Learning, 77(2-3):303-337, 2009.
[17] Sergey Levine, Zoran Popovic, and Vladlen Koltun. Feature construction for inverse reinforcement learning. In NIPS, pages 1342-1350. Curran Associates, Inc., 2010.
[18] Martin L. Puterman. Markov decision processes: Discrete stochastic dynamic programming. 1994.
[19] Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. 99:278-287, 1999.
[20] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer New York, 2006.
[21] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057-1063. The MIT Press, 1999.
[22] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
[23] Wendelin Böhmer, Steffen Grünewälder, Yun Shen, Marek Musial, and Klaus Obermayer. Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research, 14(1):2067-2118, 2013.
[24] Sridhar Mahadevan. Proto-value functions: Developmental reinforcement learning. In ICML, pages 553-560. ACM, 2005.
[25] Sridhar Mahadevan and Mauro Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. Journal of Machine Learning Research, 8(Oct):2169-2231, 2007.
[26] Sridhar Mahadevan, Mauro Maggioni, Kimberly Ferguson, and Sarah Osentoski. Learning representation and control in continuous Markov decision processes. In AAAI, volume 6, pages 1194-1199, 2006.
[27] Sham Kakade. A natural policy gradient. In NIPS, pages 1531-1538. MIT Press, 2001.
[28] Thomas Furmston and David Barber. A unifying perspective of parametric policy search methods for Markov decision processes. In Advances in Neural Information Processing Systems, pages 2717-2725, 2012.
[29] Giorgio Manganini, Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Following Newton direction in policy gradient with parameter exploration. In Neural Networks (IJCNN), 2015 International Joint Conference on, pages 1-8. IEEE, 2015.
[30] Simone Parisi, Matteo Pirotta, and Marcello Restelli. Multi-objective reinforcement learning through continuous Pareto manifold approximation. Journal of Artificial Intelligence Research, 57:187-227, 2016.
[31] Ronald Parr, Christopher Painter-Wakefield, Lihong Li, and Michael L. Littman. Analyzing feature generation for value-function approximation. In ICML, volume 227 of ACM International Conference Proceeding Series, pages 737-744. ACM, 2007.
[32] Amir Massoud Farahmand and Doina Precup. Value pursuit iteration. In NIPS, pages 1349-1357, 2012.
[33] Peter Englert and Marc Toussaint. Inverse KKT: Learning cost functions of manipulation tasks from demonstrations. In Proceedings of the International Symposium of Robotics Research, 2015.
[34] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. (JAIR), 13:227-303, 2000.
[35] Peter Dorato, Vito Cerone, and Chaouki Abdallah. Linear Quadratic Control: An Introduction. Krieger Publishing Co., Inc., Melbourne, FL, USA, 2000.
[36] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503-556, 2005.
[37] C-L Hwang and Abu Syed Md Masud. Multiple objective decision making - methods and applications: a state-of-the-art survey, volume 164. Springer Science & Business Media, 2012.
[38] Jose M. Vidal and José M. Vidal. Fundamentals of multiagent systems. 2006.
[39] Emre Mengi, E. Alper Yildirim, and Mustafa Kilic. Numerical optimization of eigenvalues of Hermitian matrix functions. SIAM Journal on Matrix Analysis and Applications, 35(2):699-724, 2014.
[40] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
6,414 | 6,801 | First-Order Adaptive Sample Size Methods to
Reduce Complexity of Empirical Risk Minimization
Aryan Mokhtari
University of Pennsylvania
[email protected]
Alejandro Ribeiro
University of Pennsylvania
[email protected]
Abstract
This paper studies empirical risk minimization (ERM) problems for large-scale
datasets and incorporates the idea of adaptive sample size methods to improve the
guaranteed convergence bounds for first-order stochastic and deterministic methods. In contrast to traditional methods that attempt to solve the ERM problem
corresponding to the full dataset directly, adaptive sample size schemes start with
a small number of samples and solve the corresponding ERM problem to its statistical accuracy. The sample size is then grown geometrically (e.g., scaling by a
factor of two) and the solution of the previous ERM is used as a warm start for the
new ERM. Theoretical analyses show that the use of adaptive sample size methods
reduces the overall computational cost of achieving the statistical accuracy of the
whole dataset for a broad range of deterministic and stochastic first-order methods. The gains are specific to the choice of method. When particularized to, e.g.,
accelerated gradient descent and stochastic variance reduced gradient, the computational cost advantage is a logarithm of the number of training samples. Numerical
experiments on various datasets confirm theoretical claims and showcase the gains
of using the proposed adaptive sample size scheme.
1
Introduction
Finite sum minimization (FSM) problems involve objectives that are expressed as the sum of a typically large number of component functions. Since evaluating descent directions is costly, it is customary to utilize stochastic descent methods that access only one of the functions at each iteration. When considering first-order methods, a fitting measure of complexity is the total number of gradient evaluations that are needed to achieve optimality of order $\epsilon$. The paradigmatic deterministic gradient descent (GD) method serves as a naive complexity upper bound and has long been known to obtain an $\epsilon$-suboptimal solution with $O(N\kappa \log(1/\epsilon))$ gradient evaluations for an FSM problem with $N$ component functions and condition number $\kappa$ [13]. Accelerated gradient descent (AGD) [14] improves the computational complexity of GD to $O(N\sqrt{\kappa}\log(1/\epsilon))$, which is known to be the optimal bound for deterministic first-order methods [13]. In terms of stochastic optimization, it has been only recently that linearly convergent methods have been proposed. Stochastic averaging gradient [15, 8], stochastic variance reduction [10], and stochastic dual coordinate ascent [17, 18] have all been shown to converge to $\epsilon$-accuracy at a cost of $O((N+\kappa)\log(1/\epsilon))$ gradient evaluations. The accelerating catalyst framework in [11] further reduces complexity to $O((N+\sqrt{N\kappa})\log(\kappa)\log(1/\epsilon))$, and the works in [1] and [7] to $O((N+\sqrt{N\kappa})\log(1/\epsilon))$. The latter matches the upper bound on the complexity of stochastic methods [20].
Perhaps the main motivation for studying FSM is the solution of empirical risk minimization (ERM)
problems associated with a large training set. ERM problems are particular cases of FSM, but they
do have two specific qualities that come from the fact that ERM is a proxy for statistical loss minimization. The first property is that since the empirical risk and the statistical loss have different
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
minimizers, there is no reason to solve ERM beyond the expected difference between the two objectives. This so-called statistical accuracy takes the place of $\epsilon$ in the complexity orders of the previous paragraph and is a constant of order $O(1/N^\alpha)$, where $\alpha$ is a constant from the interval $[0.5, 1]$ depending on the regularity of the loss function; see Section 2. The second important property of
ERM is that the component functions are drawn from a common distribution. This implies that if
we consider subsets of the training set, the respective empirical risk functions are not that different
from each other and, indeed, their differences are related to the statistical accuracy of the subset.
The relationship of ERM to statistical loss minimization suggests that ERM problems have more
structure than FSM problems. This is not exploited by most existing methods which, albeit used for
ERM, are in fact designed for FSM. The goal of this paper is to exploit the relationship between ERM
and statistical loss minimization to achieve lower overall computational complexity for a broad class
of first-order methods applied to ERM. The technique we propose uses subsamples of the training
set containing $n \le N$ component functions that we grow geometrically. In particular, we start by a
small number of samples and minimize the corresponding empirical risk, augmented with a regularization term of order $V_n$, up to its statistical accuracy. Note that, based on the first property of ERM, the
added adaptive regularization term does not modify the required accuracy while it makes the problem
strongly convex and improves the problem condition number. After solving the subproblem, we
double the size of the training set and use the solution of the problem with n samples as a warm
start for the problem with 2n samples. This is a reasonable initialization since based on the second
property of ERM the functions are drawn from a joint distribution, and, therefore, the optimal values
of the ERM problems with n and 2n functions are not that different from each other. The proposed
approach succeeds in exploiting the two properties of ERM problems to improve complexity bounds
of first-order methods. In particular, we show that to reach the statistical accuracy of the full training
set the adaptive sample size scheme reduces the overall computational complexity of a broad range
of first-order methods by a factor of $\log(N^\alpha)$. For instance, the overall computational complexity of adaptive sample size AGD to reach the statistical accuracy of the full training set is of order $O(N\sqrt{\kappa})$, which is lower than the $O(N\sqrt{\kappa}\log(N^\alpha))$ complexity of AGD.
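A sketch of the resulting meta-scheme; `solve` stands for any first-order method run until the stated suboptimality, and the growth factor, initial size, and constants are illustrative assumptions:

```python
def adaptive_sample_size(solve, w0, N, n0=32, alpha=0.5, c=1.0):
    """Adaptive sample size scheme for ERM.

    solve(n, w, target) : minimizes the regularized empirical risk R_n on
        the first n samples, warm-started at w, until the suboptimality is
        below target; returns the new iterate.
    Each subproblem is solved only to its statistical accuracy
    V_n = c / n**alpha, then the sample size is doubled.
    """
    w, n = w0, min(n0, N)
    while True:
        V_n = c / n ** alpha      # statistical accuracy of the subproblem
        w = solve(n, w, V_n)      # warm start from the previous solution
        if n == N:
            return w
        n = min(2 * n, N)         # geometric growth of the sample size
```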
Related work. The adaptive sample size approach was used in [6] to improve the performance of
the SAGA method [8] for solving ERM problems. In the dynamic SAGA (DynaSAGA) method in
[6], the size of training set grows at each iteration by adding two new samples, and the iterates are
updated by a single step of SAGA. Although DynaSAGA succeeds in improving the performance of
SAGA for solving ERM problems, it does not use an adaptive regularization term to tune the problem
condition number. Moreover, DynaSAGA only works for strongly convex functions, while in our
proposed scheme the functions are convex (not necessarily strongly convex). The work in [12] is
the most similar work to this manuscript. The Ada Newton method introduced in [12] aims to solve
each subproblem within its statistical accuracy with a single update of Newton's method by ensuring that iterates always stay in the quadratic convergence region of Newton's method. Ada Newton reaches the statistical accuracy of the full training set in almost two passes over the dataset; however, its
computational complexity is prohibitive since it requires computing the objective function Hessian
and its inverse at each iteration.
2 Problem Formulation
Consider a decision vector w ∈ R^p, a random variable Z with realizations z, and a convex loss function f(w; z). We aim to find the optimal argument that minimizes the optimization problem

\[
w^* := \operatorname*{argmin}_{w} L(w) = \operatorname*{argmin}_{w}\, \mathbb{E}_Z[f(w, Z)] = \operatorname*{argmin}_{w} \int_Z f(w, z)\, P(dz), \tag{1}
\]

where L(w) := E_Z[f(w, Z)] is defined as the expected loss, and P is the probability distribution of the random variable Z. The optimization problem in (1) cannot be solved since the distribution P is unknown. However, we have access to a training set T = {z_1, ..., z_N} containing N independent samples z_1, ..., z_N drawn from P, and, therefore, we attempt to minimize the empirical loss associated with the training set, which is equivalent to minimizing the problem

\[
w_n^* := \operatorname*{argmin}_{w} L_n(w) = \operatorname*{argmin}_{w}\, \frac{1}{n} \sum_{i=1}^{n} f(w, z_i), \tag{2}
\]

for n = N. Note that in (2) we defined L_n(w) := (1/n) Σ_{i=1}^{n} f(w, z_i) as the empirical loss.
There is a rich literature on bounds for the difference between the expected loss L and the empirical
loss Ln which is also referred to as estimation error [4, 3]. We assume here that there exists a
constant V_n, which depends on the number of samples n, that upper bounds the difference between the expected and empirical losses for all w ∈ R^p:

\[
\mathbb{E}\Big[\,\sup_{w \in \mathbb{R}^p} |L(w) - L_n(w)|\,\Big] \le V_n, \tag{3}
\]
where the expectation is with respect to the choice of the training set. The celebrated work of Vapnik in [19, Section 3.4] provides the upper bound V_n = O(√((1/n) log(1/n))), which can be improved to V_n = O(√(1/n)) using the chaining technique (see, e.g., [5]). Bounds of the order V_n = O(1/n) have been derived more recently under stronger regularity conditions that are not uncommon in practice [2, 9, 4]. In this paper, we report our results using the general bound V_n = O(1/n^α), where α can be any constant from the interval [0.5, 1].
The observation that the optimal values of the expected loss and empirical loss are within a V_n distance of each other implies that there is no gain in improving the optimization error of minimizing L_n beyond the constant V_n. In other words, if we find an approximate solution w_n such that the optimization error is bounded by L_n(w_n) − L_n(w_n^*) ≤ V_n, then finding a more accurate solution to reduce the optimization error is not beneficial, since the overall error, i.e., the sum of the estimation and optimization errors, does not become smaller than V_n. Throughout the paper we say that w_n solves the ERM problem in (2) to within its statistical accuracy if it satisfies L_n(w_n) − L_n(w_n^*) ≤ V_n.
We can further leverage the estimation error to add a regularization term of the form (cV_n/2)‖w‖² to the empirical loss to ensure that the problem is strongly convex. To do so, we define the regularized empirical risk R_n(w) := L_n(w) + (cV_n/2)‖w‖² and the corresponding optimal argument

\[
w_n^* := \operatorname*{argmin}_{w} R_n(w) = \operatorname*{argmin}_{w}\; L_n(w) + \frac{cV_n}{2}\,\|w\|^2, \tag{4}
\]

and attempt to minimize R_n with accuracy V_n. Since the regularization in (4) is of order V_n and (3) holds, the difference between R_n(w_n^*) and L(w^*) is also of order V_n; this is not as immediate as it seems, see [16]. Thus, the variable w_n solves the ERM problem in (2) to within its statistical accuracy if it satisfies R_n(w_n) − R_n(w_n^*) ≤ V_n. It follows that by solving the problem in (4) for n = N we find w_N that solves the expected risk minimization in (1) up to the statistical accuracy V_N of the full training set T. In the following section we introduce a class of methods that solve problem (4) up to its statistical accuracy faster than traditional deterministic and stochastic descent methods.
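To make the objects in (2)-(4) concrete, the following minimal sketch (not from the paper; the logistic loss, the function name, and α = 0.5 are illustrative assumptions) evaluates the regularized empirical risk R_n and its gradient for a subsample of size n:

```python
import numpy as np

def reg_emp_risk(w, X, y, c=1.0, alpha=0.5):
    """R_n(w) = L_n(w) + (c*V_n/2)||w||^2 with logistic loss, V_n = 1/n**alpha.
    X: (n, p) feature matrix, y: (n,) labels in {-1, +1}."""
    n = X.shape[0]
    V_n = 1.0 / n**alpha
    margins = -y * (X @ w)
    L_n = np.mean(np.logaddexp(0.0, margins))   # (1/n) sum log(1 + exp(-y x'w))
    R_n = L_n + 0.5 * c * V_n * (w @ w)
    sig = 1.0 / (1.0 + np.exp(-margins))        # sigmoid(-y x'w)
    grad = X.T @ (-y * sig) / n + c * V_n * w
    return R_n, grad
```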
3 Adaptive Sample Size Methods
The empirical risk minimization (ERM) problem in (4) can be solved using state-of-the-art methods
for minimizing strongly convex functions. However, these methods never exploit the particular
property of ERM that the functions are drawn from the same distribution. In this section, we propose
an adaptive sample size scheme which exploits this property of ERM to improve the convergence guarantees of traditional optimization methods to reach the statistical accuracy of the full training
set. In the proposed adaptive sample size scheme, we start with a small number of samples and solve the corresponding ERM problem to a specific accuracy. Then, we double the size of the training set and use the solution of the previous ERM problem (with half the samples) as a warm start for the new ERM problem. This procedure continues until the training set becomes identical to the given
training set T which contains N samples.
Consider the training set S_m with m samples as a subset of the full training set T, i.e., S_m ⊂ T. Assume that we have solved the ERM problem corresponding to the set S_m such that the approximate solution w_m satisfies the condition E[R_m(w_m) − R_m(w_m^*)] ≤ ε_m. Now the next step in the proposed adaptive sample size scheme is to double the size of the current training set S_m and solve the ERM problem corresponding to the set S_n, which has n = 2m samples and contains the previous set, i.e., S_m ⊂ S_n ⊂ T.

We use w_m, which is a proper approximation of the optimal solution of R_m, as the initial iterate for the optimization method that we use to minimize the risk R_n. This is a reasonable choice if the optimal arguments of R_m and R_n are close to each other, which is the case since samples are drawn from a fixed distribution P.
Algorithm 1 Adaptive Sample Size Mechanism
1: Input: Initial sample size n = m_0 and argument w_n = w_{m_0} with ‖∇R_n(w_n)‖ ≤ (√(2c)) V_n
2: while n ≤ N do {main loop}
3:    Update argument and index: w_m = w_n and m = n.
4:    Increase sample size: n = min{2m, N}.
5:    Set the initial variable: ŵ = w_m.
6:    while ‖∇R_n(ŵ)‖ > (√(2c)) V_n do
7:       Update the variable ŵ: ŵ = Update(ŵ, ∇R_n(ŵ))
8:    end while
9:    Set w_n = ŵ.
10: end while
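A minimal Python sketch of Algorithm 1 follows (an illustration, not the authors' code; `grad_Rn` and `update` are assumed interfaces: the former returns ∇R_n over the first n samples with regularization weight cV_n, the latter performs one step of any linearly convergent first-order method):

```python
import numpy as np

def adaptive_sample_size(w0, grad_Rn, update, N, m0, c=1.0, alpha=0.5):
    """Algorithm 1: grow n geometrically; solve each regularized ERM subproblem
    until ||grad R_n(w)|| <= sqrt(2c) * V_n certifies statistical accuracy."""
    V = lambda n: 1.0 / n**alpha        # statistical accuracy V_n = O(1/n^alpha)
    n, w = m0, w0                       # w0 assumed V_{m0}-accurate (Input line)
    while n < N:
        n = min(2 * n, N)               # double the sample size (Step 4)
        while np.linalg.norm(grad_Rn(w, n, c * V(n))) > np.sqrt(2 * c) * V(n):
            w = update(w, n, c * V(n))  # Step 7: any first-order update
    return w
```

With `alpha = 0.5` and `m0 = 400`, this corresponds to the setup used in the experiments of Section 5.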
Starting with w_m, we can use first-order descent methods to minimize the empirical risk R_n. Depending on the iterative method that we use for solving each ERM problem, we might need a different number of iterations to find an approximate solution w_n which satisfies the condition E[R_n(w_n) − R_n(w_n^*)] ≤ ε_n. To design a comprehensive routine we need to come up with a proper condition for the required accuracy ε_n at each phase.

In the following proposition we derive an upper bound for the expected suboptimality of the variable w_m for the risk R_n based on the accuracy of w_m for the previous risk R_m associated with the training set S_m. This upper bound allows us to choose the accuracy ε_m efficiently.
Proposition 1. Consider the sets S_m and S_n as subsets of the training set T such that S_m ⊂ S_n ⊂ T, where the numbers of samples in the sets S_m and S_n are m and n, respectively. Further, define w_m as an ε_m-optimal solution of the risk R_m in expectation, i.e., E[R_m(w_m) − R_m^*] ≤ ε_m, and recall V_n as the statistical accuracy of the training set S_n. Then the empirical risk error R_n(w_m) − R_n(w_n^*) of the variable w_m corresponding to the set S_n is, in expectation, bounded above by

\[
\mathbb{E}[R_n(w_m) - R_n(w_n^*)] \le \epsilon_m + \frac{2(n-m)}{n}\,(V_{n-m} + V_m) + 2\,(V_m - V_n) + \frac{c\,(V_m - V_n)}{2}\,\|w^*\|^2. \tag{5}
\]
Proof. See Section 7.1 in the supplementary material.
The result in Proposition 1 characterizes the sub-optimality of the variable w_m, which is an ε_m sub-optimal solution for the risk R_m, with respect to the empirical risk R_n associated with the set S_n. If we assume that the statistical accuracy V_n is of the order O(1/n^α) and we double the size of the training set at each step, i.e., n = 2m, then the inequality in (5) can be simplified to

\[
\mathbb{E}[R_n(w_m) - R_n(w_n^*)] \le \epsilon_m + \left[\,2 + \left(1 - \frac{1}{2^{\alpha}}\right)\left(2 + \frac{c}{2}\,\|w^*\|^2\right)\right] V_m. \tag{6}
\]

The expression in (6) formalizes the reason that there is no need to solve the sub-problem R_m beyond its statistical accuracy V_m. In other words, even if ε_m is zero, the expected sub-optimality will be of the order O(V_m), i.e., E[R_n(w_m) − R_n(w_n^*)] = O(V_m). Based on this observation, the required precision ε_m for solving the sub-problem R_m should be of the order ε_m = O(V_m).
The steps of the proposed adaptive sample size scheme are summarized in Algorithm 1. Note that since computation of the sub-optimality R_n(w_n) − R_n(w_n^*) requires access to the minimizer w_n^*, we replace the condition R_n(w_n) − R_n(w_n^*) ≤ V_n by a bound on the norm of the gradient ‖∇R_n(w_n)‖. The risk R_n is strongly convex, and we can bound the suboptimality R_n(w_n) − R_n(w_n^*) as

\[
R_n(w_n) - R_n(w_n^*) \le \frac{1}{2cV_n}\,\|\nabla R_n(w_n)\|^2. \tag{7}
\]

Hence, at each stage, we stop updating the variable if the condition ‖∇R_n(w_n)‖ ≤ √(2c) V_n holds, which implies R_n(w_n) − R_n(w_n^*) ≤ V_n. The intermediate variable ŵ can be updated in Step 7 using any first-order method. We will discuss this procedure for accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG) methods in Sections 4.1 and 4.2, respectively.
4 Complexity Analysis
In this section, we aim to characterize the number of required iterations sn at each stage to solve
the subproblems within their statistical accuracy. We derive this result for all linearly convergent
first-order deterministic and stochastic methods.
The inequality in (6) not only leads to an efficient policy for the required precision ε_m at each
step, but also provides an upper bound for the sub-optimality of the initial iterate, i.e., wm , for
minimizing the risk Rn . Using this upper bound, depending on the iterative method of choice, we
can characterize the number of required iterations sn to ensure that the updated variable is within
the statistical accuracy of the risk Rn . To formally characterize the number of required iterations
sn , we first assume the following conditions are satisfied.
Assumption 1. The loss functions f(w, z) are convex with respect to w for all values of z. Moreover, their gradients ∇f(w, z) are Lipschitz continuous with constant M:

\[
\|\nabla f(w, z) - \nabla f(w', z)\| \le M\,\|w - w'\|, \qquad \text{for all } z. \tag{8}
\]
The conditions in Assumption 1 imply that the average loss L(w) and the empirical loss Ln (w)
are convex and their gradients are Lipschitz continuous with constant M . Thus, the empirical risk
R_n(w) is strongly convex with constant cV_n, and its gradients ∇R_n(w) are Lipschitz continuous with parameter M + cV_n.
So far we have concluded that each subproblem should be solved up to its statistical accuracy.
This observation leads to an upper bound for the number of iterations needed at each step to solve
each subproblem. Indeed various descent methods can be executed for solving the sub-problem.
Here we intend to come up with a general result that contains all descent methods that have a
linear convergence rate when the objective function is strongly convex and smooth. In the following
theorem, we derive a lower bound for the number of required iterations sn to ensure that the variable
wn , which is the outcome of updating wm by sn iterations of the method of interest, is within the
statistical accuracy of the risk Rn for any linearly convergent method.
Theorem 2. Consider the variable w_m as a V_m-suboptimal solution of the risk R_m in expectation, i.e., E[R_m(w_m) − R_m(w_m^*)] ≤ V_m, where V_m = O(1/m^α). Consider the sets S_m ⊂ S_n ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, define 0 ≤ ρ_n < 1 as the linear convergence factor of the descent method used for updating the iterates. Then, the variable w_n generated based on the adaptive sample size mechanism satisfies E[R_n(w_n) − R_n(w_n^*)] ≤ V_n if the number of iterations s_n at the n-th stage is larger than

\[
s_n \ge \frac{-\log\!\big(3\cdot 2^{\alpha} + (2^{\alpha}-1)\,(2 + \tfrac{c}{2}\|w^*\|^2)\big)}{\log \rho_n}. \tag{9}
\]
Proof. See Section 7.2 in the supplementary material.
The result in Theorem 2 characterizes the number of required iterations at each phase. Depending on the linear convergence factor ρ_n and the parameter α for the order of statistical accuracy, the number of required iterations might be different. Note that the parameter ρ_n might depend on the size of the training set directly or through the dependency of the problem condition number on n. It is worth mentioning that the result in (9) shows a lower bound for the number of required iterations, which means that s_n = ⌊−log(3·2^α + (2^α−1)(2 + (c/2)‖w*‖²))/log ρ_n⌋ + 1 is the exact number of iterations needed when minimizing R_n, where ⌊a⌋ indicates the floor of a.
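As a quick illustration, the exact per-stage count implied by (9) is straightforward to compute (a sketch; `rho_n`, `alpha`, `c`, and ‖w*‖² are inputs the user must supply):

```python
import math

def stage_iterations(rho_n, alpha, c, w_star_norm_sq):
    """Exact number of iterations at stage n per Theorem 2, eq. (9)."""
    blowup = 3 * 2**alpha + (2**alpha - 1) * (2 + 0.5 * c * w_star_norm_sq)
    return math.floor(-math.log(blowup) / math.log(rho_n)) + 1
```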
To characterize the overall computational complexity of the proposed adaptive sample size scheme,
the exact expression for the linear convergence constant ρ_n is required. In the following section, we focus on one deterministic and one stochastic method and characterize their overall computational complexity to reach the statistical accuracy of the full training set T.
4.1 Adaptive Sample Size Accelerated Gradient (Ada AGD)

The accelerated gradient descent (AGD) method, also called Nesterov's method, is a long-established descent method which achieves the optimal convergence rate for first-order deterministic methods. In this section, we aim to combine the update of AGD with the adaptive sample size scheme in Section 3 to improve convergence guarantees of AGD for solving ERM problems. This
can be done by using AGD for updating the iterates in Step 7 of Algorithm 1. Given an iterate w_m within the statistical accuracy of the set S_m, the adaptive sample size accelerated gradient descent method (Ada AGD) requires s_n iterations of AGD to ensure that the resulting iterate w_n lies within the statistical accuracy of S_n. In particular, if we initialize the sequences ŵ and ŷ as ŵ_0 = ŷ_0 = w_m, the approximate solution w_n for the risk R_n is the outcome of the updates

\[
\hat{w}_{k+1} = \hat{y}_k - \eta_n \nabla R_n(\hat{y}_k), \tag{10}
\]
\[
\hat{y}_{k+1} = \hat{w}_{k+1} + \beta_n (\hat{w}_{k+1} - \hat{w}_k), \tag{11}
\]

after s_n iterations, i.e., w_n = ŵ_{s_n}. The parameters η_n and β_n are indexed by n since they depend on the number of samples. We use the convergence rate of AGD to characterize the number of required iterations s_n needed to guarantee that the outcome of the recursive updates in (10) and (11) is within the statistical accuracy of R_n.
Theorem 3. Consider the variable w_m as a V_m-optimal solution of the risk R_m in expectation, i.e., E[R_m(w_m) − R_m(w_m^*)] ≤ V_m, where V_m = O(1/m^α). Consider the sets S_m ⊂ S_n ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, set the parameters η_n and β_n as

\[
\eta_n = \frac{1}{cV_n + M} \qquad \text{and} \qquad \beta_n = \frac{\sqrt{cV_n + M} - \sqrt{cV_n}}{\sqrt{cV_n + M} + \sqrt{cV_n}}. \tag{12}
\]

Then, the variable w_n generated based on the update of Ada AGD in (10)-(11) satisfies E[R_n(w_n) − R_n(w_n^*)] ≤ V_n if the number of iterations s_n is larger than

\[
s_n \ge \sqrt{\frac{n^{\alpha} M + c}{c}}\;\log\!\big(6\cdot 2^{\alpha} + (2^{\alpha}-1)\,(4 + c\|w^*\|^2)\big). \tag{13}
\]

Moreover, if we define m_0 as the size of the first training set, then to reach the statistical accuracy V_N of the full training set T the overall computational complexity of Ada AGD is given by

\[
N\left[\,1 + \log_2\!\frac{N}{m_0} + \frac{2^{\alpha/2}}{2^{\alpha/2}-1}\,\sqrt{\frac{N^{\alpha} M + c}{c}}\,\log\!\big(6\cdot 2^{\alpha} + (2^{\alpha}-1)\,(4 + c\|w^*\|^2)\big)\right]. \tag{14}
\]

Proof. See Section 7.3 in the supplementary material.
The result in Theorem 3 characterizes the number of required iterations s_n to achieve the statistical accuracy of R_n. Moreover, it shows that to reach the accuracy V_N = O(1/N^α) for the risk R_N associated with the full training set T, the total computational complexity of Ada AGD is of the order O(N^{1+α/2}). Indeed, this complexity is lower than the overall computational complexity of AGD for reaching the same target, which is given by O(N√κ_N log(N^α)) = O(N^{1+α/2} log(N^α)). Note that this bound holds for AGD since the condition number κ_N := (M + cV_N)/(cV_N) of the risk R_N is of the order O(1/V_N) = O(N^α).
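Concretely, one Ada AGD stage with the parameters of (12) and the stopping rule of Algorithm 1 could be sketched as follows (hypothetical code; `grad_Rn` is an assumed interface returning the full regularized gradient over the current subsample):

```python
import numpy as np

def ada_agd_stage(w_m, grad_Rn, M, c, V_n):
    """Run updates (10)-(11) with eta_n, beta_n from (12) until the
    gradient norm falls below sqrt(2c) * V_n."""
    eta = 1.0 / (c * V_n + M)
    beta = ((np.sqrt(c * V_n + M) - np.sqrt(c * V_n))
            / (np.sqrt(c * V_n + M) + np.sqrt(c * V_n)))
    w, y = w_m.copy(), w_m.copy()
    while np.linalg.norm(grad_Rn(w)) > np.sqrt(2 * c) * V_n:
        w_next = y - eta * grad_Rn(y)        # update (10)
        y = w_next + beta * (w_next - w)     # update (11)
        w = w_next
    return w
```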
4.2 Adaptive Sample Size SVRG (Ada SVRG)
For the adaptive sample size mechanism presented in Section 3, we can also use linearly convergent stochastic methods such as stochastic variance reduced gradient (SVRG) in [10] to update the
iterates. The SVRG method succeeds in reducing the computational complexity of deterministic
first-order methods by computing a single gradient per iteration and using a delayed version of the
average gradient to update the iterates. Indeed, we can exploit the idea of SVRG to develop low
computational complexity adaptive sample size methods to improve the performance of deterministic adaptive sample size algorithms. Moreover, the adaptive sample size variant of SVRG (Ada
SVRG) enhances the proven bounds for SVRG to solve ERM problems.
We proceed to extend the idea of the adaptive sample size scheme to the SVRG algorithm. To do so, consider w_m as an iterate within the statistical accuracy, E[R_m(w_m) − R_m(w_m^*)] ≤ V_m, for a set S_m which contains m samples. Consider s_n and q_n as the numbers of outer and inner loops for the update of SVRG, respectively, when the size of the training set is n. Further, consider ŵ and w̃ as the sequences of iterates for the outer and inner loops of SVRG, respectively. In the adaptive sample size SVRG (Ada SVRG) method to minimize the risk R_n, we set the approximate solution w_m for the previous ERM problem as the initial iterate for the outer loop, i.e., ŵ_0 = w_m. Then, the outer loop update, which contains the gradient computation, is defined as

\[
\nabla R_n(\hat{w}_k) = \frac{1}{n}\sum_{i=1}^{n} \nabla f(\hat{w}_k, z_i) + cV_n\,\hat{w}_k, \qquad k = 0, \dots, s_n - 1, \tag{15}
\]

and the inner loop for the k-th outer loop contains q_n iterations of the following update

\[
\tilde{w}_{t+1,k} = \tilde{w}_{t,k} - \eta_n\big(\nabla f(\tilde{w}_{t,k}, z_{i_t}) + cV_n\,\tilde{w}_{t,k} - \nabla f(\hat{w}_k, z_{i_t}) - cV_n\,\hat{w}_k + \nabla R_n(\hat{w}_k)\big), \tag{16}
\]

for t = 0, ..., q_n − 1, where the iterates for the inner loop at step k are initialized as w̃_{0,k} = ŵ_k, and i_t is the index of the function chosen uniformly at random from the set {1, ..., n} at the inner iterate t. The outcome of each inner loop, w̃_{q_n,k}, is used as the variable for the next outer loop, i.e., ŵ_{k+1} = w̃_{q_n,k}. We define the outcome of s_n outer loops, ŵ_{s_n}, as the approximate solution for the risk R_n, i.e., w_n = ŵ_{s_n}.
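A sketch of one Ada SVRG stage per (15)-(16) follows (illustrative only; `f_grad(w, z)` is an assumed per-sample gradient oracle and `rng` a NumPy random generator):

```python
import numpy as np

def ada_svrg_stage(w_m, f_grad, samples, c, V_n, eta, s_n, q_n, rng):
    """s_n outer loops of (15), each with q_n inner steps of (16)."""
    n = len(samples)
    w_hat = w_m.copy()
    for _ in range(s_n):
        # Outer loop, eq. (15): full gradient of R_n at the anchor w_hat.
        full_grad = sum(f_grad(w_hat, z) for z in samples) / n + c * V_n * w_hat
        w_t = w_hat.copy()
        for _ in range(q_n):
            # Inner loop, eq. (16): variance-reduced stochastic step.
            z = samples[rng.integers(n)]       # i_t uniform on {1, ..., n}
            v = (f_grad(w_t, z) + c * V_n * w_t
                 - f_grad(w_hat, z) - c * V_n * w_hat + full_grad)
            w_t = w_t - eta * v
        w_hat = w_t                            # next outer anchor
    return w_hat                               # w_n
```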
In the following theorem we derive a bound on the number of required outer loops s_n to ensure that the variable w_n generated by the updates in (15) and (16) will be within the statistical accuracy of R_n in expectation, i.e., E[R_n(w_n) − R_n(w_n^*)] ≤ V_n. To reach the smallest possible lower bound for s_n, we properly choose the number of inner loop iterations q_n and the learning rate η_n.
Theorem 4. Consider the variable w_m as a V_m-optimal solution of the risk R_m, i.e., a solution such that E[R_m(w_m) − R_m(w_m^*)] ≤ V_m, where V_m = O(1/m^α). Consider the sets S_m ⊂ S_n ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, set the number of inner loop iterations as q_n = n and the learning rate as η_n = 0.1/(M + cV_n). Then, the variable w_n generated based on the update of Ada SVRG in (15)-(16) satisfies E[R_n(w_n) − R_n(w_n^*)] ≤ V_n if the number of iterations s_n is larger than

\[
s_n \ge \log_2\!\big(3\cdot 2^{\alpha} + (2^{\alpha}-1)\,(2 + \tfrac{c}{2}\|w^*\|^2)\big). \tag{17}
\]

Moreover, to reach the statistical accuracy V_N of the full training set T, the overall computational complexity of Ada SVRG is given by

\[
4N \log_2\!\big(3\cdot 2^{\alpha} + (2^{\alpha}-1)\,(2 + \tfrac{c}{2}\|w^*\|^2)\big). \tag{18}
\]

Proof. See Section 7.4.
The result in (17) shows that the minimum number of outer loop iterations for Ada SVRG is equal to s_n = ⌊log_2(3·2^α + (2^α−1)(2 + (c/2)‖w*‖²))⌋ + 1. This bound leads to the result in (18), which shows that the overall computational complexity of Ada SVRG to reach the statistical accuracy of the full training set T is of the order O(N). This bound not only improves the O(N^{1+α/2}) bound for Ada AGD, but also enhances the complexity of SVRG for reaching the same target accuracy, which is given by O((N + κ_N) log(N^α)) = O(N log(N^α)).
5 Experiments
In this section, we compare the adaptive sample size versions of a group of first-order methods, including gradient descent (GD), accelerated gradient descent (AGD), and stochastic variance reduced gradient (SVRG), with their standard (fixed sample size) versions. In the main paper, we only use the RCV1 dataset. Further numerical experiments on the MNIST dataset can be found in Section 7.5 in the supplementary material. We use N = 10,000 samples of the RCV1 dataset for the training set and the remaining 10,242 as the test set. The number of features in each sample is p = 47,236. In our experiments, we use the logistic loss. The constant c should be of the order of the gradients' Lipschitz continuity constant M, and, therefore, we set it as c = 1 since the samples are normalized and M = 1. The size of the initial training set for the adaptive methods is m_0 = 400. In our experiments we assume α = 0.5, and therefore the added regularization term is (1/√n)‖w‖².
The plots in Figure 1 compare the suboptimality of GD, AGD, and SVRG with their adaptive sample
size versions. As our theoretical results suggested, we observe that the adaptive sample size scheme
reduces the overall computational complexity of all of the considered linearly convergent first-order
[Figure 1: three log-scale plots of suboptimality vs. effective passes, comparing GD with Ada GD, AGD with Ada AGD, and SVRG with Ada SVRG.]
Figure 1: Suboptimality vs. number of effective passes for RCV1 dataset with regularization of O(1/√n).
[Figure 2: three plots of test error vs. effective passes, comparing GD with Ada GD, AGD with Ada AGD, and SVRG with Ada SVRG.]
Figure 2: Test error vs. number of effective passes for RCV1 dataset with regularization of O(1/√n).
methods. If we compare the test errors of GD, AGD, and SVRG with their adaptive sample size variants, we reach the same conclusion: the adaptive sample size scheme reduces the overall computational complexity required to reach the statistical accuracy of the full training set. In particular, the left plot in Figure 2 shows that Ada GD approaches the minimum test error of 8% after 55 effective passes, while GD cannot improve the test error even after 100 passes. Indeed, GD would reach a lower test error if we ran it for more iterations. The central plot in Figure 2 showcases that Ada AGD reaches 8% test error about 5 times faster than AGD. This is as predicted by log(N^α) = log(100) ≈ 4.6. The right plot in Figure 2 illustrates a similar improvement for Ada SVRG. We have observed similar performance on other datasets such as MNIST; see Section 7.5 in the supplementary material.
6 Discussions
We presented an adaptive sample size scheme to improve the convergence guarantees for a class
of first-order methods which have linear convergence rates under strong convexity and smoothness
assumptions. The logic behind the proposed adaptive sample size scheme is to replace the solution
of a relatively hard problem (the ERM problem for the full training set) by a sequence of relatively easier problems (ERM problems corresponding to a subset of samples). Indeed, whenever m < n, solving the ERM problem in (4) for the loss R_m is simpler than the one for the loss R_n because:
(i) The adaptive regularization term of order V_m makes the condition number of R_m smaller than the condition number of R_n, which uses a regularizer of order V_n.
(ii) The approximate solution w_m that we need to find for R_m is less accurate than the approximate solution w_n we need to find for R_n.
(iii) The computation cost of an iteration for R_m (e.g., the cost of evaluating a gradient) is lower than the cost of an iteration for R_n.
Properties (i)-(iii), combined with the ability to grow the sample size geometrically, reduce the overall computational complexity for reaching the statistical accuracy of the full training set. We particularized our results to develop adaptive (Ada) versions of AGD and SVRG. In both methods we found a computational complexity reduction of order O(log(1/V_N)) = O(log(N^α)), which was corroborated in numerical experiments. The idea and analysis of adaptive first-order methods apply generically to any other approach with a linear convergence rate (Theorem 2). The development of sample size adaptation for sublinear methods is left for future research.
Acknowledgments
This research was supported by NSF CCF 1717120 and ARO W911NF1710438.
References
[1] Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In STOC, 2017.
[2] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer, 2010.
[4] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, Vancouver, British Columbia, Canada, December 3-6, 2007, pages 161–168, 2007.
[5] Olivier Bousquet. Concentration inequalities and empirical processes theory applied to the analysis of learning algorithms. PhD thesis, École Polytechnique, 2002.
[6] Hadi Daneshmand, Aurélien Lucchi, and Thomas Hofmann. Starting small - learning with adaptive sample sizes. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pages 1463–1471, 2016.
[7] Aaron Defazio. A simple practical accelerated method for finite sums. In Advances in Neural Information Processing Systems, pages 676–684, 2016.
[8] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1646–1654, 2014.
[9] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Competing with the empirical risk minimizer in a single pass. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 728–763, 2015.
[10] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, Lake Tahoe, Nevada, United States, pages 315–323, 2013.
[11] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3384–3392, 2015.
[12] Aryan Mokhtari, Hadi Daneshmand, Aurélien Lucchi, Thomas Hofmann, and Alejandro Ribeiro. Adaptive Newton method for empirical risk minimization to statistical accuracy. In Advances in Neural Information Processing Systems 29, Barcelona, Spain, pages 4062–4070, 2016.
[13] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
[14] Yurii Nesterov et al. Gradient methods for minimizing composite objective function. 2007.
[15] Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, Lake Tahoe, Nevada, United States, pages 2672–2680, 2012.
[16] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670, 2010.
[17] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14:567–599, 2013.
[18] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105–145, 2016.
[19] Vladimir Vapnik. The nature of statistical learning theory. Springer Science & Business Media, 2013.
[20] Blake E. Woodworth and Nati Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems, pages 3639–3647, 2016.
Hiding Images in Plain Sight:
Deep Steganography
Shumeet Baluja
Google Research
Google, Inc.
[email protected]
Abstract
Steganography is the practice of concealing a secret message within another,
ordinary, message. Commonly, steganography is used to unobtrusively hide a small
message within the noisy regions of a larger image. In this study, we attempt
to place a full size color image within another image of the same size. Deep
neural networks are simultaneously trained to create the hiding and revealing
processes and are designed to specifically work as a pair. The system is trained on
images drawn randomly from the ImageNet database, and works well on natural
images from a wide variety of sources. Beyond demonstrating the successful
application of deep learning to hiding images, we carefully examine how the result
is achieved and explore extensions. Unlike many popular steganographic methods
that encode the secret message within the least significant bits of the carrier image,
our approach compresses and distributes the secret image's representation across
all of the available bits.
1 Introduction to Steganography
Steganography is the art of covered or hidden writing; the term itself dates back to the 15th century,
when messages were physically hidden. In modern steganography, the goal is to covertly communicate
a digital message. The steganographic process places a hidden message in a transport medium, called
the carrier. The carrier may be publicly visible. For added security, the hidden message can also be
encrypted, thereby increasing the perceived randomness and decreasing the likelihood of content discovery even if the existence of the message is detected. Good introductions to steganography and steganalysis (the process of discovering hidden messages) can be found in [1-5].
There are many well publicized nefarious applications of steganographic information hiding, such as
planning and coordinating criminal activities through hidden messages in images posted on public sites, making the communication and the recipient difficult to discover [6]. Beyond the multitude of
misuses, however, a common use case for steganographic methods is to embed authorship information,
through digital watermarks, without compromising the integrity of the content or image.
The challenge of good steganography arises because embedding a message can alter the appearance
and underlying statistics of the carrier. The amount of alteration depends on two factors: first, the
amount of information that is to be hidden. A common use has been to hide textual messages in
images. The amount of information that is hidden is measured in bits-per-pixel (bpp). Often, the
amount of information is set to 0.4bpp or lower. The longer the message, the larger the bpp, and
therefore the more the carrier is altered [6, 7]. Second, the amount of alteration depends on the carrier
image itself. Hiding information in the noisy, high-frequency filled, regions of an image yields less
humanly detectable perturbations than hiding in the flat regions. Work on estimating how much
information a carrier image can hide can be found in [8].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: The three components of the full system. Left: Secret-Image preparation. Center: Hiding
the image in the cover image. Right: Uncovering the hidden image with the reveal network; this is
trained simultaneously, but is used by the receiver.
The most common steganography approaches manipulate the least significant bits (LSB) of images to
place the secret information - whether done uniformly or adaptively, through simple replacement or
through more advanced schemes [9, 10]. Though often not visually observable, statistical analysis
of image and audio files can reveal whether the resultant files deviate from those that are unaltered.
Advanced methods attempt to preserve the image statistics, by creating and matching models of the
first and second order statistics of the set of possible cover images explicitly; one of the most popular
is named HUGO [11]. HUGO is commonly employed with relatively small messages (< 0.5bpp).
In contrast to the previous studies, we use a neural network to implicitly model the distribution of
natural images as well as embed a much larger message, a full-size image, into a carrier image.
Despite recent impressive results achieved by incorporating deep neural networks with steganalysis [12-14], there have been relatively few attempts to incorporate neural networks into the hiding process itself [15-19]. Some of these studies have used deep neural networks (DNNs) to select which
LSBs to replace in an image with the binary representation of a text message. Others have used
DNNs to determine which bits to extract from the container images. In contrast, in our work, the
neural network determines where to place the secret information and how to encode it efficiently;
the hidden message is dispersed throughout the bits in the image. A decoder network, that has been
simultaneously trained with the encoder, is used to reveal the secret image. Note that the networks
are trained only once and are independent of the cover and secret images.
In this paper, the goal is to visually hide a full N × N × RGB pixel secret image in another N × N × RGB cover image, with minimal distortion to the cover image (each color channel is 8
bits). However, unlike previous studies, in which a hidden text message must be sent with perfect
reconstruction, we relax the requirement that the secret image is losslessly received. Instead, we
are willing to find acceptable trade-offs in the quality of the carrier and secret image (this will be
described in the next section). We also provide brief discussions of the discoverability of the existence
of the secret message. Previous studies have demonstrated that hidden message bit rates as low as
0.1bpp can be discovered; our bit rates are 10x to 40x higher. Though visually hard to detect, given
the large amount of hidden information, we do not expect the existence of a secret message to be
hidden from statistical analysis. Nonetheless, we will show that commonly used methods do not find
it, and we give promising directions on how to trade-off the difficulty of existence-discovery with
reconstruction quality, as required.
2 Architectures and Error Propagation
Though steganography is often conflated with cryptography, in our approach, the closest analogue
is image compression through auto-encoding networks. The trained system must learn to compress
the information from the secret image into the least noticeable portions of the cover image. The
architecture of the proposed system is shown in Figure 1.
The three components shown in Figure 1 are trained as a single network; however, it is easiest to
describe them individually. The leftmost, Prep-Network, prepares the secret image to be hidden. This
component serves two purposes. First, in cases in which the secret image (size M × M) is smaller than the cover image (N × N), the preparation network progressively increases the size of the secret image to the size of the cover, thereby distributing the secret image's bits across the entire N × N
Figure 2: Transformations made by the preparation network (3 examples shown). Left: Original
Color Images. Middle: the three channels of information extracted by the preparation network that
are input into the middle network. Right: zoom of the edge-detectors. The three color channels are
transformed by the preparation-network. In the most easily recognizable example, the 2nd channel
activates for high frequency regions, e.g. textures and edges (shown enlarged (right)).
pixels. (For space reasons, we do not provide details of experiments with smaller images, and instead
concentrate on full size images). The more important purpose, relevant to all sizes of hidden images,
is to transform the color-based pixels to more useful features for succinctly encoding the image, such as edges [20, 21], as shown in Figure 2.
The second/main network, the Hiding Network, takes as input the output of the preparation-network
and the cover image, and creates the Container image. The input to this network is an N × N pixel field, with depth-concatenated RGB channels of the cover image and the transformed channels of
the secret image. Over 30 architectures for this network were attempted for our study, with varying numbers of hidden layers and convolution sizes; the best consisted of 5 convolution layers, each with 50 filters of {3 × 3, 4 × 4, 5 × 5} patches. Finally, the right-most network, the Reveal Network,
is used by the receiver of the image; it is the decoder. It receives only the Container image (not the
cover nor secret image). The decoder network removes the cover image to reveal the secret image.
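As a rough PyTorch-style sketch of this three-network design (an illustration under stated assumptions, not the authors' exact architecture: the class names, the 1×1 output convolutions, and the uniform five-layer depth are choices made here for brevity):

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """One layer of parallel 3x3 / 4x4 / 5x5 convolutions (50 filters each)."""
    def __init__(self, c_in, c_branch=50):
        super().__init__()
        self.b3 = nn.Conv2d(c_in, c_branch, 3, padding=1)
        self.b4 = nn.Conv2d(c_in, c_branch, 4, padding=2)  # even kernel, crop below
        self.b5 = nn.Conv2d(c_in, c_branch, 5, padding=2)

    def forward(self, x):
        h, w = x.shape[-2:]
        y4 = self.b4(x)[..., :h, :w]            # trim the extra row/column
        return torch.relu(torch.cat([self.b3(x), y4, self.b5(x)], dim=1))

def stack(c_in, n_layers=5):
    layers, c = [], c_in
    for _ in range(n_layers):
        layers.append(MultiScaleConv(c))
        c = 150                                 # 3 branches x 50 filters
    return nn.Sequential(*layers), c

class PrepNet(nn.Module):                       # secret image -> feature maps
    def __init__(self):
        super().__init__()
        self.body, c = stack(3)
        self.head = nn.Conv2d(c, 3, 1)
    def forward(self, secret):
        return self.head(self.body(secret))

class HideNet(nn.Module):                       # cover + features -> container
    def __init__(self):
        super().__init__()
        self.body, c = stack(6)                 # depth-concatenated input
        self.head = nn.Conv2d(c, 3, 1)
    def forward(self, cover, prepped):
        return self.head(self.body(torch.cat([cover, prepped], dim=1)))

class RevealNet(nn.Module):                     # container -> recovered secret
    def __init__(self):
        super().__init__()
        self.body, c = stack(3)
        self.head = nn.Conv2d(c, 3, 1)
    def forward(self, container):
        return self.head(self.body(container))
```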
As mentioned earlier, our approach borrows heavily from auto-encoding networks [22]; however,
instead of simply encoding a single image through a bottleneck, we encode two images such that the
intermediate representation (the container image) appears as similar as possible to the cover image.
The system is trained by reducing the error shown below (c and s are the cover and secret images respectively, and β weighs their reconstruction errors):

\[
L(c, c', s, s') = \|c - c'\| + \beta\,\|s - s'\|. \tag{1}
\]
It is important to note where the errors are computed and the weights that each error affects, see
Figure 3. In particular, note that the error term ||c ? c0 || does not apply to the weights of the
reveal-network that receives the container image and extracts the secret image. On the other hand,
all of the networks receive the error signal ?||s ? s0 || for reconstructing the hidden image. This
ensures that the representations formed early in the preparation network as well as those used for
reconstruction of the cover image also encode information about the secret image.
Figure 3: The three networks are trained as a single, large, network. Error term 1 affects only the first
two networks. Error term 2 affects all 3. S is the secret image, C is the cover image.
To ensure that the networks do not simply encode the secret image in the LSBs, a small amount of
noise is added to the output of the second network (e.g. into the generated container image) during
training. The noise was designed such that the LSB was occasionally flipped; this ensured that the
LSB was not the sole container of the secret image's reconstruction. Later, we will discuss where the secret image's information is placed. Next, we examine how the network performs in practice.
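A single training step wiring the three networks to the loss in Equation 1, with the LSB-flip noise added to the container before the reveal network, might look like the sketch below (hypothetical training code; the noise scale of roughly one intensity level and the use of mean squared error are assumptions). Note that the cover error only reaches the prep and hiding networks automatically, since the container does not depend on the reveal network, while the secret error backpropagates through all three.

```python
import torch

def train_step(prep, hide, reveal, cover, secret, optimizer, beta=1.0):
    """One step on L = ||c - c'|| + beta * ||s - s'|| (Eq. 1).
    `optimizer` holds the parameters of all three networks."""
    optimizer.zero_grad()
    container = hide(cover, prep(secret))
    # Noise of about one intensity level on the [0, 1]-scaled container,
    # so the secret cannot be stored solely in the least significant bit.
    noisy = container + (1.0 / 255.0) * torch.randn_like(container)
    revealed = reveal(noisy)
    loss = torch.mean((cover - container) ** 2) \
           + beta * torch.mean((secret - revealed) ** 2)
    loss.backward()
    optimizer.step()
    return loss.item()
```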
3 Empirical Evaluation
The three networks were trained as described above using Adam [23]. For simplicity, the reconstructions minimized the sum of squares error of the pixel difference, although other image metrics could
have easily been substituted [24, 25]. The networks were trained using randomly selected pairs of
images from the ImageNet training set [26].
Quantitative results are shown in Figure 4, as measured by the SSE per pixel, per channel. The
testing was conducted on 1,000 image pairs taken from ImageNet images (not used in training). For
comparison, also shown is the result of using the same network for only encoding the cover image without the secret image (i.e., β = 0). This gives the best reconstruction error of the cover using this network (this is unattainable while also encoding the secret image). Also shown in Figure 4 are
histograms of errors for the cover and reconstruction. As can be seen, there are few large pixel errors.
Method       β      Cover   Secret
Deep-Stego   0.75   2.8     3.6
Deep-Stego   1.00   3.0     3.2
Deep-Stego   1.25   6.4     2.8
Cover Only   0.00   0.1     (n/a)

Figure 4: Left: Number of intensity values off (out of 256) for each pixel, per channel, on cover and secret image (table above). Right: Distribution of pixel errors for cover and secret images, respectively (histograms not reproduced here).
Figure 5 shows the results of hiding six images, chosen to show varying error rates. These images are
not taken from ImageNet to demonstrate that the networks have not over-trained to characteristics of
the ImageNet database, and work on a range of pictures taken with cell phone cameras and DSLRs.
Note that most of the reconstructed cover images look almost identical to the original cover images,
despite encoding all the information to reconstruct the secret image. The differences between the
original and cover images are shown in the rightmost columns (magnified 5x in intensity).
Consider how these error rates compare to creating the container through simple LSB substitution: replacing the 4 least significant bits (LSB) of the cover image with the 4 most significant bits (MSB) of the secret image. In this procedure, to recreate the secret image, the MSBs are copied from the container image, and the remaining bits are set to their average value across the training dataset. Doing this, the average pixel error per channel on the cover image's reconstruction is 5.4 (in a range of 0-255). The average error on the reconstruction of the secret image (when using the average value for the missing LSB bits) is approximately 4.0 (see footnote 1). Why is the error
larger than 4.0? The higher error for the cover image's reconstruction reflects the fact that the distribution of bits in the natural images used is different for the MSBs and LSBs; therefore, even though the secret and cover image are drawn from the same distribution, when the MSBs from the secret image are used in the place of the LSBs, larger errors occur than when simply using the average values of the LSBs. Most importantly, these error rates are significantly higher than those achieved by our system (Figure 4).
Footnote 1: Note that an error of 4.0 is expected when the average value is used to fill in the LSBs: removing 4 bits from a pixel's encoding yields 16x fewer intensities that can be represented. By selecting the average value to
replace the missing bits, the maximum error can be 8, and the average error is 4, assuming uniformly distributed
bits. To avoid any confusion, we point out that though it is tempting to consider using the average value for the
cover image also, recall that the LSBs of the cover image are where the MSBs of the secret image are stored.
Therefore, those bits must be used in this encoding scheme, and hence the larger error.
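For reference, the LSB-substitution baseline described above is simple to reproduce; a sketch (not the paper's code; uint8 images assumed):

```python
import numpy as np

def lsb_embed(cover, secret, k=4):
    """Replace the k LSBs of the cover with the k MSBs of the secret."""
    keep = (0xFF << k) & 0xFF                  # e.g. 0xF0 for k = 4
    return ((cover & keep) | (secret >> (8 - k))).astype(np.uint8)

def lsb_extract(container, k=4):
    """Recover the secret's MSBs from the container's low bits."""
    msb = (container & ((1 << k) - 1)) << (8 - k)
    fill = 1 << (8 - k - 1)   # stand-in for the dataset average of the lost bits
    return (msb | fill).astype(np.uint8)
```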
[Figure 5 image grid; columns: Original (cover, secret) | Reconstructed (cover, secret) | Differences ×5 (cover, secret).]
Figure 5: 6 Hiding Results. Left pair of each set: original cover and secret image. Center pair: cover
image embedded with the secret image, and the secret image after extraction from the container.
Right pair: Residual errors for cover and hidden, enhanced 5x. The errors per pixel, per channel are the smallest in the top row: (3.1, 4.5), and largest in the last (4.5, 7.9).
We close this section with a demonstration of the limitation of our approach. Recall that the networks
were trained on natural images found in the ImageNet challenge. Though this covers a very large
range of images, it is illuminating to examine the effects when other types of images are used. Five
such images are shown in Figure 6. In the first row, a pure white image is used as the cover, to
examine the visual effects of hiding a colorful secret image. This simple case was not encountered
in training with ImageNet images. The second and third rows change the secret image to bright
pink circles and uniform noise. As can be seen, even though the container image (4th column)
contains only limited noise, the recovered secret image is extremely noisy. In the final two rows, the
cover image is changed to circles, and uniform noise, respectively. As expected, the errors for the
reconstruction of the cover and secret are now large, though the secret image remains recognizable.
3.1 What if the original cover image became accessible?
For many steganographic applications, it can safely be assumed that access to the original cover
image (without the secret image embedded) is impossible for an attacker. However, what if the
original cover image was discovered? What could then be ascertained about the secret image, even
without access to the decoding network? In Figure 5, we showed the difference image between the
original cover and the container with 5x enhancement; almost nothing was visible. We reexamine
Figure 6: Results with images outside the set of natural images.
the residual image at 5x, 10x, and 20x enhancement (with clipping at 255 where appropriate), see
Figure 7. In the first row, note that the residual (at 20x) strongly resembles the cover image. In the
second row, the residual is a combination of the cover and secret image, and in the third row, we see
the most troubling result: features of the secret image are revealed. (Recall that this happens only
when the original, unperturbed image, is available for comparison). There are many standard methods
for obfuscation, such as adding cryptographic encodings of the secret image before embedding it
into the cover image. We demonstrate another avenue that can be used in conjunction with any other
approach: modifying the network's error function.
In addition to the two error terms described, we add an error term that minimizes the pixel-wise
correlation between the residual of the cover image and the secret image, corr(R_c, S), where R_c = ‖C − C′‖ and S is the secret image. Many weightings for this term were empirically tested. In the results shown in Figure 7 (bottom), it is scaled to approximately 0.15 × (number of pixels) × (number of channels).
Minimizing the residual's correlation with the secret image removed many of the secret image's features from the residuals, even when 20x magnification is used. Naturally, the robustness and resilience come at a price; the quality of some of the reconstructions has decreased, as shown (see
saturation of reds in first image).
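One plausible form of this penalty, sketched below (the paper does not give its exact formula; the per-image Pearson correlation over flattened pixels and the scaling are assumptions consistent with the description above):

```python
import torch

def residual_correlation_penalty(cover, container, secret, weight=0.15):
    """Penalize corr(R_c, S) where R_c = cover - container (per image)."""
    r = (cover - container).flatten(1)        # residual, one row per image
    s = secret.flatten(1)
    r = r - r.mean(dim=1, keepdim=True)
    s = s - s.mean(dim=1, keepdim=True)
    corr = (r * s).sum(dim=1) / (r.norm(dim=1) * s.norm(dim=1) + 1e-8)
    scale = weight * secret[0].numel()        # ~0.15 * pixels * channels
    return scale * corr.abs().mean()
```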
4 Where is the Secret Image Encoded?
The primary focus of this paper is to concretely demonstrate that it is possible to encode a large
amount of information in an image with limited visually noticeable artifacts. However, no explicit
attempt has been made to actively hide the existence of that information from machine detection.
Though we cannot expect to completely hide the fact that up to 1/2 of the information is part of a
hidden message, measures can be taken to make it more difficult to discover. First, however, we must
determine where the information of the secret image resides.
Is the network simply hiding the information about the secret image in the least significant bits of the
cover image? Tools exist to seek out hidden information in the LSBs. One such publicly available
steganalysis toolkit, StegExpose, was used to test the detectability of our hidden images [27-29].
Per the description of the tool: "StegExpose rating algorithm is derived from an intelligent and thoroughly tested combination of pre-existing pixel based steganalysis methods including Sample Pairs by Dumitrescu (2003), RS Analysis by Fridrich (2001), Chi Square Attack by Westfeld (2000) and Primary Sets by Dumitrescu (2002)" [27]. In addition to the default settings (threshold = 0.2),
the detection thresholds were varied throughout a large range. The ROC curve for StegExpose is
shown in Figure 8. Note the little variation beyond random guessing (the green line).
StegExpose should have been able to find the information if it were simply placed in the LSB bits.
We turn to a second method to find where the information is stored. The images used in the study
Figure 7: Top 3 rows. If the original image is leaked and is subtracted from the container image, the
residual can be computed. With enough enhancement (20x), some of the secret image is revealed.
Bottom 3 rows: by explicitly creating an error term that minimized the correlation between the
residual and the secret image, the residual reveals less about the secret image; however, the pixel
errors for the container rise (note the less saturated colors in some of the red regions).
Figure 8: ROC curves: True Positive Rate vs. False Positive
Rate for StegExpose when trying to detect images embedded
via the proposed method.
are composed, at each pixel, of 24 bits (8 × (R, G, B)). If we flip the first bit of the R channel of all the pixels in the container image, we can measure its effect on the reconstruction of the container image itself and also, by propagating the modified image through the reveal network, on the reconstruction of the secret image. The effects are striking, see Figure 9.
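The sensitivity measurement can be sketched as follows (hypothetical code; `reveal` maps a uint8 container to a recovered secret image, and bit 7 denotes the most significant bit):

```python
import numpy as np

def bit_flip_sensitivity(container, reveal, channel, bit):
    """Mean |change| in the recovered secret, per color channel, when one bit
    plane of one channel of the container is flipped everywhere."""
    flipped = container.copy()
    flipped[..., channel] ^= np.uint8(1 << bit)
    base = reveal(container).astype(np.float64)
    mod = reveal(flipped).astype(np.float64)
    return np.abs(mod - base).mean(axis=(0, 1))   # one value per secret channel
```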
In Figure 9, the left half of the figure should be considered the ?control?. The upper left bar-chart,
shows that the effects of changing a bit in the Red channel of the container only has an effect on the
red-channel in the container, and that the magnitude is proportional to the significance of the bit; this
is exactly as expected and holds true for the Green and Blue channels as well (shown on the diagonal).
Much more interesting is the right side of Figure 9. Here, we see that a bit flip in any bit position in any color channel of the container image has an effect across all color channels in the secret
Figure 9: Sensitivity to Bits in the Container Image. (Left:) Control case, the container image.
Changing a bit in the container image obviously has no effect on other bits. (Right): However, that
same bit flip in the container image has effects across all color bands in the recovered secret image.
For example, if we want to measure the effect of flipping the 2nd bit in the Blue channel, we would
look at the marked entries (marked with *).
image's reconstruction. Moreover, the error does not follow the standard significance ordering of the
bit positions. In other words, the information for the secret image is spread across the color channels;
hence the reason it was not detected earlier.
Despite StegExpose not finding the encodings, we suspected that this amount of information should
be readily discoverable. It has recently been demonstrated in multiple studies that deep networks
are competitive and often better than the more established methods of steganalysis that use hand-selected image features [12-14]. Interestingly, many published steganalysis techniques are not blind:
they assume that they are detecting hidden images only from known hiding methods and even that
access to the statistics about the cover image distribution are available. This makes the steganalysis
task easier while making the hiding task much more difficult. We will not discuss the real-world
limitations of that assumption here, but will follow that example.
We trained the steganalysis networks as binary classifiers, using the unperturbed ImageNet images
as negative samples, and our containers as positive examples. We discovered that the steganalysis
networks performed well with 24 × (N × N) binary input layers. Each of the 24 layers represented a bit in the pixel representation of the input image. The networks were trained in an analogous manner
to others in this study; again a multitude of architectures and learning rates were attempted.
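The 24-plane binary input can be produced by unpacking each pixel into its bits; a preprocessing sketch matching the described format (function name assumed):

```python
import numpy as np

def to_bit_planes(image):
    """(H, W, 3) uint8 image -> (24, H, W) float32 binary planes,
    eight planes per color channel, most significant bit first."""
    planes = [((image[..., ch] >> bit) & 1)
              for ch in range(image.shape[-1])
              for bit in range(7, -1, -1)]
    return np.stack(planes).astype(np.float32)
```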
The fully convolutional networks were able to obtain combined classification rates between 90% and 95%.
For comparison, the same networks were retrained to find Least-Significant-Bit substitution (where
the L least significant bits of each color channel in the cover image are replaced with the L most
significant bits from the secret image). The networks were trained and tested with L = 1, 2, 3, 4; all
achieved combined rates over 99%. An additional set of tests revealed that even when a single bit
was randomly placed in one of the 4 bit positions, the detection rate remained over 99%. Given these
detection rates on even a single bit, the high detection rates with a full image are not surprising.
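For reference, the Least-Significant-Bit substitution used as the comparison baseline can be written in a few lines of numpy; this is a generic sketch of the scheme described above, not the exact code used in the study:

```python
import numpy as np

def lsb_embed(cover, secret, L):
    """Replace the L least significant bits of each cover channel with the
    L most significant bits of the secret; both are uint8 arrays of equal shape."""
    mask = np.uint8((0xFF << L) & 0xFF)   # keeps the (8 - L) high bits of the cover
    return (cover & mask) | (secret >> (8 - L))
```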
Nonetheless, this leads to an immediate direction for future work. In this study, the steganalysis
networks were trained and used after the complete encoding system was created. However, using
pre-trained and/or simultaneously trained steganalysis networks in an adversarial learning framework,
such as Generative Adversarial Networks (GANs) [30], during the training of the steganography nets
provides a method to incorporate an obfuscation-based error metric. The adversary provides a
supplemental error signal, based upon the discoverability of the hidden message, that is minimized in
addition to the reconstruction errors. [31] has recently explored a very similar procedure
with small messages hidden in images.
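One plausible way to wire in such an adversarial signal, sketched with hypothetical `hide_net`, `reveal_net`, and `adversary` callables and an assumed weight `lam`; the text does not commit to a specific formulation:

```python
import numpy as np

def combined_loss(cover, secret, hide_net, reveal_net, adversary, lam=0.1):
    """Reconstruction errors plus an obfuscation term that rewards containers
    the adversary scores as clean; `adversary` returns P(payload present)."""
    container = hide_net(cover, secret)
    revealed = reveal_net(container)
    recon = np.mean((container - cover) ** 2) + np.mean((revealed - secret) ** 2)
    obfuscation = -np.log(1.0 - adversary(container) + 1e-8)
    return recon + lam * obfuscation
```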
5 Discussion & Future Work
In this section, we briefly discuss a few observations from this study and present ideas for future
work. First, let us consider the possibility of training a network to recover the hidden images after the
system has been deployed and without access to the original network. One can imagine that if an
attacker was able to obtain numerous instances of container images created by the targeted
system, and in each instance at least one of the two component images (cover or secret image) was
also given, then a network could be trained to recover both constituent components. What can an attacker
do without access to this ground-truth "training" data? Using a smoothness constraint or another
common heuristic from classic image decomposition and blind source separation [32-34] may
be a first alternative. With many of these approaches, obtaining even a modest amount of training
data would be useful for tuning and setting parameters and priors. If such an attack is expected, it is
an open research question how much adapting the techniques described in Section 3.1 may mitigate the
effectiveness of these attempts.
As described in the previous section, in its current form, correct detection of the existence (not
necessarily the exact content) of a hidden image is indeed possible. The discovery rate is high because
of the amount of information hidden relative to the cover image's data (a 1:1 ratio). This is far
more than state-of-the-art systems that transmit reliably undetected messages. We presented one of
many methods to make it more difficult to recover the contents of the hidden image, by explicitly
reducing the similarity of the cover image's residual to the hidden image. Though beyond the scope
of this paper, we can make the system substantially more resilient by supplementing the presented
mechanisms as follows. Before hiding the secret image, its pixels are permuted (in place) in one
of M previously agreed-upon ways. The permuted secret image is then hidden by the system, as is
the key (an index into M). This makes recovery difficult even by looking at the residuals (assuming
access to the original image is available), since the residuals have no spatial structure. The use of
this approach must be balanced against (1) the need to send a permutation key (though this can be sent
reliably in only a few bytes), and (2) the fact that the permuted secret image is substantially more
difficult to encode, thereby potentially increasing the reconstruction errors throughout the system.
Finally, it should be noted that in order to employ this approach, the trained networks in this study
cannot be used without retraining. The entire system must be retrained, as the hiding networks can no
longer exploit local structure in the secret image for encoding information.
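A toy sketch of the permutation step, assuming the M agreed-upon permutations are derived from integer seeds; the real scheme could derive permutations differently:

```python
import numpy as np

def permute_secret(secret, key, M=1024):
    """Permute the secret's pixels with one of M pre-agreed permutations;
    both parties derive the same ordering from the shared index `key`."""
    assert 0 <= key < M
    flat = secret.reshape(-1, secret.shape[-1])            # (H*W, channels)
    order = np.random.RandomState(seed=key).permutation(len(flat))
    return flat[order].reshape(secret.shape), order

def unpermute_secret(permuted, order):
    """Invert the permutation on the receiver side."""
    flat = permuted.reshape(-1, permuted.shape[-1])
    return flat[np.argsort(order)].reshape(permuted.shape)
```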
This study opens a new avenue for exploration with steganography and, more generally, for placing
supplementary information in images. Several previous methods have attempted to use neural networks
to either augment or replace a small portion of an image-hiding system. We have demonstrated
a method to create a fully trainable system that provides visually excellent results while unobtrusively
placing a full-size color image into another image. Although the system has been described in the
context of images, the same system can be trained for embedding text, different-sized images, or
audio. Additionally, by using spectrograms of audio files as images, the techniques described here
can readily be used on audio samples.
There are many immediate and long-term avenues for expanding this work. Three of the most
immediate are listed here. (1) To make a complete steganographic system, hiding the existence of the
message from statistical analyzers should be addressed. This will likely necessitate a new objective in
training (e.g., an adversary), as well as, perhaps, encoding smaller images within larger cover images.
(2) The proposed embeddings described in this paper are not intended for use with lossy image files.
If lossy encodings, such as JPEG, are required, then working directly with the DCT coefficients instead
of the spatial domain is possible [35]. (3) For simplicity, we used a straightforward SSE error metric
for training the networks; however, error metrics more closely associated with human vision, such as
SSIM [24], can easily be substituted.
References
[1] Gary C Kessler and Chet Hosmer. An overview of steganography. Advances in Computers, 83(1):51-107, 2011.
[2] Gary C Kessler. An overview of steganography for the computer forensics examiner. Forensic Science Communications, 6(3), 2014.
[3] Gary C Kessler. An overview of steganography for the computer forensics examiner (web), 2015.
[4] Jussi Parikka. Hidden in plain sight: The steganographic image. https://unthinking.photography/themes/fauxtography/hidden-in-plain-sight-the-steganographic-image, 2017.
[5] Jessica Fridrich, Jan Kodovský, Vojtěch Holub, and Miroslav Goljan. Breaking HUGO: the process discovery. In International Workshop on Information Hiding, pages 85-101. Springer, 2011.
[6] Jessica Fridrich and Miroslav Goljan. Practical steganalysis of digital images: State of the art. In Electronic Imaging 2002, pages 1-13. International Society for Optics and Photonics, 2002.
[7] Hamza Ozer, Ismail Avcibas, Bulent Sankur, and Nasir D Memon. Steganalysis of audio based on audio quality metrics. In Electronic Imaging 2003, pages 55-66. International Society for Optics and Photonics, 2003.
[8] Farzin Yaghmaee and Mansour Jamzad. Estimating watermarking capacity in gray scale images based on image complexity. EURASIP Journal on Advances in Signal Processing, 2010(1):851920, 2010.
[9] Jessica Fridrich, Miroslav Goljan, and Rui Du. Detecting LSB steganography in color and gray-scale images. IEEE Multimedia, 8(4):22-28, 2001.
[10] Abdelfatah A Tamimi, Ayman M Abdalla, and Omaima Al-Allaf. Hiding an image inside another image using variable-rate steganography. International Journal of Advanced Computer Science and Applications (IJACSA), 4(10), 2013.
[11] Tomáš Pevný, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pages 161-177. Springer, 2010.
[12] Yinlong Qian, Jing Dong, Wei Wang, and Tieniu Tan. Deep learning for steganalysis via convolutional neural networks. In SPIE/IS&T Electronic Imaging, pages 94090J-94090J. International Society for Optics and Photonics, 2015.
[13] Lionel Pibre, Jérôme Pasquet, Dino Ienco, and Marc Chaumont. Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source mismatch. Electronic Imaging, 2016(8):1-11, 2016.
[14] Lionel Pibre, Jérôme Pasquet, Dino Ienco, and Marc Chaumont. Deep learning for steganalysis is better than a rich model with an ensemble classifier, and is natively robust to the cover source-mismatch. arXiv preprint arXiv:1511.04855, 2015.
[15] Sabah Husien and Haitham Badi. Artificial neural network for steganography. Neural Computing and Applications, 26(1):111-116, 2015.
[16] Imran Khan, Bhupendra Verma, Vijay K Chaudhari, and Ilyas Khan. Neural network based steganography algorithm for still images. In Emerging Trends in Robotics and Communication Technologies (INTERACT), 2010 International Conference on, pages 46-51. IEEE, 2010.
[17] V Kavitha and KS Easwarakumar. Neural based steganography. PRICAI 2004: Trends in Artificial Intelligence, pages 429-435, 2004.
[18] Alexandre Santos Brandao and David Calhau Jorge. Artificial neural networks applied to image steganography. IEEE Latin America Transactions, 14(3):1361-1366, 2016.
[19] Robert Jarušek, Eva Volna, and Martin Kotyrba. Neural network approach to image steganography techniques. In Mendel 2015, pages 317-327. Springer, 2015.
[20] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408, 2010.
[21] Anthony J Bell and Terrence J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[22] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[23] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[24] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[25] Andrew B Watson. DCT quantization matrices visually optimized for individual images. In Proc. SPIE, 1993.
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014.
[27] Benedikt Boehm. StegExpose: a tool for detecting LSB steganography. CoRR, abs/1410.6656, 2014.
[28] StegExpose on GitHub. https://github.com/b3dk7/StegExpose.
[29] darknet.org.uk. StegExpose: steganalysis tool for detecting steganography in images. https://www.darknet.org.uk/2014/09/stegexpose-steganalysis-tool-detecting-steganography-images/, 2014.
[30] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[31] Jamie Hayes and George Danezis. ste-GAN-ography: Generating steganographic images via adversarial training. arXiv preprint arXiv:1703.00371, 2017.
[32] J-F Cardoso. Blind signal separation: statistical principles. Proceedings of the IEEE, 86(10):2009-2025, 1998.
[33] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis, volume 46. John Wiley & Sons, 2004.
[34] Li Shen and Chuohao Yeo. Intrinsic images decomposition using a local and global sparse representation of reflectance. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 697-704. IEEE, 2011.
[35] Hossein Sheisi, Jafar Mesgarian, and Mostafa Rahmani. Steganography: DCT coefficient replacement method and compare with Jsteg algorithm. International Journal of Computer and Electrical Engineering, 4(4):458, 2012.
| 6802 |@word middle:2 unaltered:1 compression:1 briefly:1 nd:2 c0:3 retraining:1 open:2 reused:1 willing:1 hyv:1 seek:1 r:1 rgb:3 decomposition:2 thereby:3 substitution:2 contains:1 selecting:1 interestingly:1 rightmost:1 existing:1 recovered:2 com:2 current:1 surprising:1 diederik:1 must:6 readily:2 john:1 dct:3 visible:2 remove:1 designed:2 visibility:1 progressively:1 msb:5 v:1 half:1 discovering:1 selected:2 fewer:1 generative:2 intelligence:1 detecting:5 provides:3 attack:2 mendel:1 org:2 five:1 rc:2 undetectable:1 recognizable:2 inside:1 manner:1 secret:76 indeed:1 expected:4 examine:4 planning:1 nor:1 chi:1 salakhutdinov:1 decreasing:1 steganography:27 little:1 increasing:2 hiding:22 supplementing:1 discover:2 underlying:1 estimating:2 moreover:1 medium:1 kessler:3 easiest:1 what:4 santos:1 minimizes:1 substantially:2 emerging:1 supplemental:1 finding:1 transformation:1 magnified:1 safely:1 quantitative:1 mitigate:1 exactly:1 ensured:1 scaled:1 classifier:2 uk:2 control:2 sherjil:1 colorful:1 danezis:1 carrier:9 before:2 positive:3 shumeet:2 resilience:1 local:3 engineering:1 despite:3 encoding:15 approximately:2 resembles:1 k:1 limited:2 range:4 practical:1 camera:1 testing:1 practice:2 examiner:2 procedure:2 jan:1 empirical:1 bell:1 significantly:1 revealing:1 matching:1 adapting:1 pre:2 word:1 cannot:2 close:1 andrej:1 context:1 impossible:1 writing:1 www:1 misuse:1 demonstrated:3 center:2 missing:2 send:1 straightforward:1 jimmy:1 shen:1 simplicity:2 recovery:1 pure:1 qian:1 pouget:1 importantly:1 fill:1 century:1 embedding:4 classic:1 sse:2 variation:1 analogous:1 transmit:1 enhanced:1 imagine:1 heavily:1 tan:1 exact:1 rinen:1 goodfellow:1 trend:2 magnification:1 recognition:2 hamza:1 database:2 bottom:2 preprint:2 wang:2 reexamine:1 electrical:1 region:5 ensures:1 eva:1 ordering:1 trade:2 removed:1 mentioned:1 weigh:1 balanced:1 complexity:1 warde:1 chet:1 trained:22 creates:1 upon:2 completely:1 easily:3 represented:2 america:1 stacked:1 ech:1 describe:1 sejnowski:1 detected:2 artificial:3 outside:1 jean:1 encoded:1 larger:6 heuristic:1 supplementary:1 distortion:1 relax:1 reconstruct:1 cvpr:1 encoder:1 tested:3 statistic:4 benedikt:1 transform:1 noisy:3 itself:4 final:1 obviously:1 net:2 ilyas:1 reconstruction:19 jamie:1 ste:1 relevant:1 date:1 ismail:1 description:1 constituent:1 enhancement:3 requirement:1 jing:1 lionel:2 generating:1 perfect:1 adam:2 andrew:1 propagating:1 measured:2 sole:1 noticeable:2 received:1 come:1 larochelle:1 direction:2 concentrate:1 closely:1 correct:1 compromising:1 filter:2 modifying:1 stochastic:1 exploration:1 human:1 public:1 resilient:1 dnns:2 hamid:1 extension:1 hold:1 considered:1 ground:1 visually:6 scope:1 mostafa:1 early:1 smallest:1 purpose:2 perceived:1 ruslan:1 proc:1 individually:1 largest:1 create:2 successfully:1 tool:6 reflects:1 offs:1 activates:1 sight:3 modified:1 avoid:1 zhou:1 varying:2 conjunction:1 encode:7 derived:1 focus:1 likelihood:1 contrast:2 adversarial:4 detect:2 entire:2 hidden:33 transformed:2 pixel:21 uncovering:1 classification:1 hossein:1 pascal:1 augment:1 art:3 spatial:2 field:1 once:1 extraction:1 beach:1 having:1 identical:1 flipped:1 placing:2 look:2 alter:1 future:3 minimized:3 mirza:1 others:2 intelligent:1 yoshua:2 employ:1 few:4 modern:1 randomly:3 oja:1 composed:1 simultaneously:4 preserve:1 zoom:1 individual:1 replaced:1 intended:1 replacement:2 attempt:5 jessica:3 detection:6 ab:2 message:27 possibility:1 highly:1 evaluation:1 saturated:1 photonics:3 farley:1 edge:4 ascertained:1 modest:1 
filled:1 unobtrusively:2 circle:2 minimal:1 miroslav:3 instance:2 column:2 earlier:2 cover:60 jpeg:1 ordinary:1 clipping:1 entry:1 uniform:2 successful:1 conducted:1 stored:2 unattainable:1 publicized:1 combined:2 adaptively:1 st:1 thoroughly:1 international:8 sensitivity:1 accessible:1 off:2 dong:1 decoding:1 terrence:1 michael:1 sanjeev:1 again:1 fridrich:4 huang:1 necessitate:1 creating:3 ek:1 yeo:1 actively:1 li:2 alteration:2 coefficient:2 inc:1 explicitly:3 depends:2 blind:3 later:1 performed:1 doing:1 portion:2 red:4 competitive:1 recover:3 jia:1 formed:1 publicly:2 square:2 bright:1 became:1 characteristic:1 efficiently:1 convolutional:2 yield:2 ensemble:1 vincent:1 steganographic:8 russakovsky:1 published:1 randomness:1 detector:1 chaudhari:1 nonetheless:2 frequency:2 resultant:1 naturally:1 associated:1 spie:2 dataset:1 popular:2 recall:3 color:13 bpp:5 dimensionality:1 holub:1 agreed:1 sean:1 carefully:1 back:1 appears:1 alexandre:1 higher:3 forensics:2 follow:2 hosmer:1 tom:2 wei:1 done:1 though:11 strongly:1 correlation:3 autoencoders:1 hand:2 receives:2 working:1 web:1 transport:1 mehdi:1 su:1 replacing:1 propagation:1 google:3 assessment:1 quality:5 reveal:7 artifact:1 perhaps:1 lossy:2 gray:2 usa:1 effect:10 consisted:1 true:2 rahmani:1 hence:2 white:1 leaked:1 during:2 noted:1 authorship:1 criterion:1 leftmost:1 trying:1 complete:2 demonstrate:3 confusion:1 performs:1 zhiheng:1 image:219 wise:1 photography:1 recently:2 common:4 permuted:3 hugo:4 empirically:1 overview:3 volume:1 significant:8 isabelle:1 smoothness:1 tuning:1 analyzer:1 dino:2 had:1 toolkit:1 access:6 longer:2 impressive:1 similarity:2 add:1 patrick:1 integrity:1 closest:1 hide:6 recent:1 showed:1 phone:1 occasionally:1 binary:3 watson:1 jorge:1 seen:2 prepares:1 additional:1 george:1 spectrogram:1 employed:1 deng:1 determine:2 tempting:1 signal:4 full:7 multiple:1 simoncelli:1 alan:1 long:2 manipulate:1 aapo:1 vision:3 metric:5 physically:1 histogram:1 arxiv:4 achieved:4 dec:1 encrypted:1 cell:1 receive:1 addition:3 want:1 robotics:1 decreased:1 addressed:1 lsb:11 krause:1 source:4 bovik:1 container:25 unlike:2 file:4 sent:2 effectiveness:1 structural:1 bernstein:1 intermediate:1 revealed:3 enough:1 embeddings:1 variety:1 affect:3 latin:1 bengio:2 architecture:4 idea:1 avenue:3 bottleneck:1 whether:2 six:1 recreate:1 distributing:1 deep:13 covertly:1 useful:3 generally:1 covered:1 listed:1 cardoso:1 karpathy:1 amount:11 band:1 http:3 exist:1 coordinating:1 per:8 blue:2 detectability:1 goljan:3 key:3 demonstrating:1 threshold:2 drawn:2 changing:2 imaging:4 sum:1 concealing:1 communicate:1 striking:1 named:1 place:6 throughout:3 almost:2 electronic:4 patch:1 separation:2 acceptable:1 ayman:1 bit:40 layer:4 courville:1 copied:1 encountered:1 tieniu:1 activity:1 occur:1 optic:3 constraint:1 fei:2 scene:1 flat:1 erkki:1 pricai:1 extremely:1 discoverable:1 relatively:2 martin:1 combination:2 pink:1 across:6 smaller:3 reconstructing:1 son:1 sheikh:1 making:2 happens:1 jussi:1 taken:4 remains:1 previously:1 discus:3 detectable:1 turn:1 mechanism:1 bing:1 flip:3 serf:1 available:5 apply:1 appropriate:1 pierre:1 subtracted:1 alternative:1 robustness:1 existence:7 original:13 compress:2 recipient:1 remaining:1 ensure:1 top:2 gan:2 exploit:1 concatenated:1 reflectance:1 society:3 objective:1 added:2 flipping:1 primary:2 diagonal:1 guessing:1 losslessly:1 antoine:1 ozer:1 iclr:1 capacity:1 decoder:3 lajoie:1 me:2 reason:2 ozair:1 assuming:2 chart:1 index:1 manzagol:1 ratio:1 demonstration:1 minimizing:1 difficult:6 
troubling:1 robert:1 potentially:1 hao:1 negative:1 rise:1 ba:2 reliably:2 cryptographic:1 satheesh:1 attacker:3 perform:1 upper:1 ssim:1 convolution:2 observation:1 juha:1 immediate:3 hinton:1 communication:3 looking:1 discovered:3 perturbation:1 varied:1 mansour:1 retrained:2 intensity:3 rating:1 criminal:1 david:2 pair:7 required:2 khan:2 optimized:1 imagenet:9 security:1 textual:1 established:1 kingma:1 nip:1 beyond:4 able:3 bar:1 below:1 adversary:2 mismatch:2 pattern:1 challenge:3 saturation:1 including:1 green:2 analogue:1 natural:6 difficulty:1 undetected:1 residual:13 advanced:3 forensic:1 scheme:2 altered:1 github:2 technology:1 brief:1 numerous:1 picture:1 created:2 extract:2 auto:2 deviate:1 text:3 prior:1 discovery:4 byte:1 embedded:3 fully:2 expect:2 permutation:1 interesting:1 limitation:2 proportional:1 geoffrey:1 borrows:1 digital:3 illuminating:1 s0:3 suspected:1 haitham:1 principle:1 verma:1 row:9 succinctly:1 changed:1 placed:3 last:1 side:1 wide:1 sparse:1 distributed:1 curve:2 plain:3 depth:1 default:1 world:1 resides:1 rich:1 concretely:1 commonly:3 made:2 far:1 transaction:2 reconstructed:2 observable:1 implicitly:1 global:1 reveals:1 hayes:1 receiver:2 assumed:1 eero:1 khosla:1 why:1 additionally:1 promising:1 channel:20 learn:1 robust:1 ca:1 expanding:1 obtaining:1 interact:1 du:1 excellent:1 necessarily:1 posted:1 anthony:1 domain:1 substituted:2 marc:2 significance:2 main:1 spread:1 noise:5 prep:1 nothing:1 pevn:1 cryptography:1 xu:1 enlarged:1 site:1 roc:2 deployed:1 wiley:1 natively:1 obfuscation:2 position:3 explicit:1 theme:1 breaking:1 third:2 weighting:1 ian:1 removing:1 remained:1 embed:2 unperturbed:2 explored:1 abadie:1 multitude:2 intrinsic:1 incorporating:1 workshop:2 false:1 adding:1 corr:3 quantization:1 vojt:1 texture:1 magnitude:1 karhunen:1 rui:1 easier:1 vijay:1 simply:5 explore:1 appearance:1 likely:1 visual:2 aditya:1 springer:3 gary:3 truth:1 determines:1 dispersed:1 extracted:1 ma:1 goal:2 marked:2 targeted:1 sized:1 replace:3 price:1 content:4 hard:1 change:1 eurasip:1 baluja:1 specifically:1 uniformly:2 reducing:3 distributes:1 denoising:2 olga:1 called:1 multimedia:1 attempted:3 chaumont:2 aaron:1 select:1 berg:1 arises:1 jonathan:1 alexander:1 filler:1 preparation:7 incorporate:2 steganalysis:18 audio:6 trainable:1 |
6,416 | 6,803 | Neural Program Meta-Induction
Jacob Devlin*
Google
[email protected]
Rudy Bunel*
University of Oxford
[email protected]
Rishabh Singh
Microsoft Research
[email protected]
Pushmeet Kohli*
DeepMind
[email protected]
Matthew Hausknecht
Microsoft Research
[email protected]
Abstract
Most recently proposed methods for Neural Program Induction work under the
assumption of having a large set of input/output (I/O) examples for learning any
underlying input-output mapping. This paper aims to address the problem of data
and computation efficiency of program induction by leveraging information from
related tasks. Specifically, we propose two approaches for cross-task knowledge
transfer to improve program induction in limited-data scenarios. In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of
related tasks, and the best model is adapted towards the new task using transfer
learning. In our second approach, meta program induction, a k-shot learning approach is used to make a model generalize to new tasks without additional training.
To test the efficacy of our methods, we constructed a new benchmark of programs
written in the Karel programming language [17]. Using an extensive experimental
evaluation on the Karel benchmark, we demonstrate that our proposals dramatically
outperform the baseline induction method that does not use knowledge transfer. We
also analyze the relative performance of the two approaches and study conditions
in which they perform best. In particular, meta induction outperforms all existing
approaches under extreme data sparsity (when a very small number of examples are
available), i.e., fewer than ten. As the number of available I/O examples increases
(i.e., a thousand or more), portfolio-adapted program induction becomes the best
approach. For intermediate data sizes, we demonstrate that the combined method
of adapted meta program induction has the strongest performance.
1 Introduction
Neural program induction has been a very active area of research in the last few years, but this past
work has made a highly variable set of assumptions about the amount of training data and the types of
training signals that are available. One common scenario is example-driven algorithm induction,
where the goal is to learn a model which can perform a specific task (i.e., an underlying program
or algorithm), such as sorting a list of integers [7, 11, 12, 21]. Typically, the goal of these works is
to compare a newly proposed network architecture to a baseline model, and the system is trained
on input/output examples (I/O examples) as a standard supervised learning task. For example, for
integer sorting, the I/O examples would consist of pairs of unsorted and sorted integer lists, and the
model would be trained to minimize the cross-entropy loss of the output sequence. In this way, the
induction model is similar to a standard sequence generation task such as machine translation or
image captioning. In these works, the authors typically assume that a near-infinite amount of I/O
examples corresponding to a particular task is available.
* Work performed at Microsoft Research.
Other works have made different assumptions about data: Li et al. [14] trains models from scratch
using 32 to 256 I/O examples. Lake et al. [13] learns to induce complex concepts from several
hundred examples. Devlin et al. [5], Duan et al. [6], and Santoro et al. [19] are able to perform
induction using as few one I/O example, but these works assume that a large set of background tasks
from the same task family are available for training. Neelakantan et al. [16] and Andreas et al. [1]
also develop models which can perform induction on new tasks that were not seen at training time,
but are conditioned on a natural language representation rather than I/O examples.
These varying assumptions about data are all reasonable in differing scenarios. For example, in a
scenario where a reference implementation of the program is available, it is reasonable to expect that
an unlimited amount of I/O examples can be generated, but it may be unreasonable to assume that any
similar program will also be available. However, we can also consider a scenario like FlashFill [9],
where the goal is to learn a regular-expression-based string transformation program from user-provided
examples, such as "John Smith → Smith, J.". Here, it is only reasonable to assume
that a handful of I/O examples are available for a particular task, but that many examples are available
for other tasks in the same family (e.g., "Frank Miller → Frank M.").
In this work, we compare several different techniques for neural program induction, with a particular
focus on how the relative accuracy of these techniques differs as a function of the available training
data. In other words, if technique A is better than technique B when only five I/O examples are
available, does this mean A will also be better than B when 50 I/O examples are available? What
about 1000? 100,000? How does this performance change if data for many related tasks is available?
To answer these questions, we evaluate four general techniques for cross-task knowledge sharing:
- Plain Program Induction (PLAIN) - Supervised learning is used to train a model which
can perform induction on a single task, i.e., read in an input example for the task and predict
the corresponding output. No cross-task knowledge sharing is performed.
- Portfolio-Adapted Program Induction (PLAIN + ADAPT) - Simple transfer learning is
used to adapt a model which has been trained on a related task to a new task.
- Meta Program Induction (META) - A k-shot-learning-style model is used to represent an
exponential family of tasks, where the training I/O examples corresponding to a task are
directly conditioned on as input to the network. This model can generalize to new tasks
without any additional training.
- Adapted Meta Program Induction (META + ADAPT) - The META model is adapted to a
specific new task using round-robin hold-one-out training on the task's I/O examples.
We evaluate these techniques on a synthetic domain described in Section 2, using a simple but strong
network architecture. All models are fully example-driven, so the underlying program representation
is only used to generate I/O examples, and is not used when training or evaluating the model.
2 Karel Domain
In order to ground the ideas presented here, we describe our models in relation to a particular
synthetic domain called "Karel". Karel is an educational programming language developed at
Stanford University in the 1980s [17]. In this language, a virtual agent named Karel the Robot moves
around a 2D grid world, placing markers and avoiding obstacles. The domain-specific language (DSL)
for Karel is moderately complex, as it allows if/then/else blocks, for loops, and while loops,
but does not allow variable assignments. Compared to current program induction benchmarks,
Karel introduces a new challenge: learning programs with complex control flow, where state-of-the-art
program synthesis techniques involving constraint solving and enumeration do not scale
because of the prohibitively large search space. Karel is also an interesting domain, as it is used
for example-driven programming in an introductory Stanford programming course.2 In this course,
students are provided with several I/O grids corresponding to some underlying Karel program that
they have never seen before, and must write a single program which can be run on all inputs to
generate the corresponding outputs. This differs from typical programming assignments, since the
program specification is given in the form of I/O examples rather than natural language. An example
is given in Figure 1. Note that inducing Karel programs is not a toy reinforcement learning task.
2 The programs are written manually by students; it is not used to teach program induction or synthesis.
Since the example I/O grids are of varying dimensions, the learning task is not to induce a single
trace that only works on grids of a fixed size, but rather to induce a program that can perform the
desired action on arbitrary-size grids, thereby forcing it to use the loop structure appropriately.
Figure 1: Karel Domain: On the left, a sample task from the Karel domain with two training I/O
examples (I1, O1), (I2, O2) and one test I/O example (Ĩ, Õ). The computer is Karel, the circles
represent markers and the brick wall represents obstacles. On the right, the language spec for Karel.
In this work, we only explore the induction variant of Karel, where instead of attempting to synthesize
the program, we attempt to directly generate the output grid Õ from a corresponding input grid Ĩ.
Although the underlying program is used to generate the training data, it is not used by the model in
any way, so in principle it does not have to explicitly exist. For example, a more complex real-world
analogue would be a system where a user controls a drone to provide examples of a task such as
"Fly around the boundary of the forest, and if you see a deer, take a picture of it, then return home."
Such a task might be difficult to represent using a program, but could be possible with a sufficiently
powerful and well-trained induction model, especially if cross-task knowledge sharing is used.
3 Plain Program Induction
In this work, plain program induction (denoted as PLAIN) refers to the supervised training of a
parametric model using a set of input/output examples (I1, O1), ..., (IN, ON), such that the model
can take some new Ĩ as input and emit the corresponding Õ. In this scenario, all I/O examples in
training and test correspond to the same task (i.e., underlying program or algorithm), such as sorting
a list of integers. Examples of past work in plain program induction using neural networks include
[7, 11, 12, 8, 4, 20, 2].
For the Karel domain, we use a simple architecture shown on the left side of Figure 2. The
input feature maps are 16-dimensional vectors with n-hot encodings to represent the objects
of each cell, i.e., (AgentFacingNorth, AgentFacingEast, ..., OneMarker, TwoMarkers,
..., Obstacle). Additionally, instead of predicting the output grid directly, we use an
LSTM to predict the delta between the input grid and the output grid as a series of
tokens. For example, AgentRow=+1 AgentCol=+2 HeroDir=south MarkerRow=0 MarkerCol=0
MarkerCount=+2 would indicate that the hero has moved north 1 row, east 2 columns, is facing south,
and has also added two markers at its starting position. This sequence can be deterministically applied to
the input to create the output grid. Specific details about the model architecture and training are given
in Section 8.
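As an illustration, the delta tokens can be applied with a simple interpreter; the dict-based state and the token format here are assumptions based on the example above:

```python
def apply_delta(state, delta_tokens):
    """Apply predicted delta tokens such as ['AgentRow=+1', 'HeroDir=south']
    to a dict-based grid state; numeric fields are relative offsets."""
    state = dict(state)  # copy so the input grid state is left untouched
    numeric = {'AgentRow', 'AgentCol', 'MarkerRow', 'MarkerCol', 'MarkerCount'}
    for token in delta_tokens:
        field, value = token.split('=')
        if field in numeric:
            state[field] = state.get(field, 0) + int(value)  # int('+2') == 2
        else:
            state[field] = value                             # e.g. HeroDir=south
    return state
```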
4 Portfolio-Adapted Program Induction
Most past work in neural program induction assumes that a very large amount of training data is
available to train a particular task, and ignores data sparsity issues entirely. However, in a practical
scenario such as the FlashFill domain described in Section 1 or the real-world Karel analogue
Figure 2: Network Architecture: Diagrams for the general network architectures used for the Karel
domain. Specifics of the model are provided in Section 8.
described in Section 2, I/O examples for a new task must be provided by the user. In this case, it may
be unrealistic to expect more than a handful of I/O examples corresponding to a new task.
Of course, it is typically infeasible to train a deep neural network from scratch with only a handful of
training examples. Instead, we consider a scenario where data is available for a number of background
tasks from the same task family. In the Karel domain, the task family is simply any task from the
Karel DSL, but in principle the task family can be a more abstract concept, such as "the set of
string transformations that a user might perform on columns in a spreadsheet."
One way of taking advantage of such background tasks is with straightforward transfer learning,
which we refer to as portfolio-adapted program induction (denoted as PLAIN + ADAPT). Here, we
have a portfolio of models, each trained on a single background I/O task. To train an induction model
for a new task, we select the "best" background model and use it as an initialization point for training
our new model. This is analogous to the type of transfer learning used in standard classification
tasks like image recognition or machine translation [10, 15]. The criterion by which we select this
background model is to score the training I/O examples for the new task with each model in the
portfolio and select the one with the highest log-likelihood.
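A sketch of the selection step, assuming each portfolio model exposes a hypothetical `log_prob(input, output)` method:

```python
def select_background_model(portfolio, examples):
    """Pick the pretrained model with the highest log-likelihood on the new
    task's training I/O examples; it then initializes the fine-tuning run."""
    def total_log_likelihood(model):
        return sum(model.log_prob(inp, out) for inp, out in examples)
    return max(portfolio, key=total_log_likelihood)
```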
5 Meta Program Induction
Although we expect that PLAIN + ADAPT will allow us to learn an induction model with fewer I/O
examples than training from scratch, it is still subject to the normal pitfalls of SGD-based training.
In particular, it is typically very difficult to train powerful DNNs using very few I/O examples (e.g.,
< 100) without encountering significant overfitting.
An alternative method is to train a single network which represents an entire (exponentially large)
family of tasks, and the latent representation of a particular task is represented by conditioning on
the training I/O examples for that task. We refer to this type of model as meta induction (denoted as
META) because instead of using SGD to learn a latent representation of a particular task based on I/O
examples, we are using SGD to learn how to learn a latent task representation based on I/O examples.
More specifically, our meta induction architecture takes as input a set of demonstration examples
(I1, O1), ..., (Ik, Ok) and an additional eval input Ĩ, and emits the corresponding output Õ. A diagram
is shown in Figure 2. The number of demonstration examples k is typically small, e.g., 1 to 5. At
training time, we are given a large number of tasks with k + 1 examples each. During training,
one example is chosen at random to represent the eval example, the others are used to represent the
demonstration examples. At test time, we are given k I/O examples which correspond to a new task
? Then, we are able to generate
that was not seen at training, along with one or more eval inputs I.
?
the corresponding O for the new task without performing any SGD. The META model could also be
described as a k-shot learning system, closely related to Duan et al. [6] and Santoro et al. [19].
In a scenario where a moderate number of I/O examples are available at test time, e.g., 10 to 100,
performing meta induction is non-trivial. It is not computationally feasible to train a model which is
directly conditioned on k = 100 examples, and using a larger value of k at test time than at training
time creates an undesirable mismatch. So, if the model is trained using k examples but n examples
are available at test time (n > k), the approach we take is to randomly sample a number of k-sized
sets and perform ensembling of the softmax log-probabilities for each output token. There are (n
choose k) total subsets available, but we found little improvement in using more than 2 · n/k. We set
k = 5 in all experiments, and present results using different values of n in Section 8.
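A sketch of the subset-ensembling procedure, with a hypothetical `model.log_softmax(subset, eval_input)` interface returning per-token log-probabilities as an array, and assuming n > k:

```python
import random

def ensemble_logprobs(model, examples, eval_input, k=5):
    """Average per-token softmax log-probabilities over random k-sized subsets
    of the n demonstration examples (about 2 * n / k subsets, as in the text)."""
    n = len(examples)
    num_subsets = max(1, 2 * n // k)
    totals = None
    for _ in range(num_subsets):
        subset = random.sample(examples, k)
        logp = model.log_softmax(subset, eval_input)  # array of token log-probs
        totals = logp if totals is None else totals + logp
    return totals / num_subsets
```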
6 Adapted Meta Program Induction
The previous approach to using n > k I/O examples at test time seems reasonable, but it is certainly
not optimal. An alternative approach is to combine the best aspects of META and PLAIN + ADAPT,
and adapt the meta model to a particular new task using SGD. To do this, we can repeatedly
sample k + 1 I/O examples from the n total examples provided, and fine-tune the META model for
the new task in the exact manner in which it was trained originally. For decoding, we still perform
the same algorithm as the META model, but the weights have been adapted for the particular task
being decoded.

Figure 3: Data-Mixture Regularization
In order to mitigate overfitting, we found that it is useful
to perform "data-mixture regularization," where the I/O examples for the new task are mixed with
random training data corresponding to other tasks. In all experiments here, we sample 10% of the I/O
examples in a minibatch from the new task and 90% from random training tasks. It is possible that
underfitting could occur in this scenario, but note that the meta network is already trained to represent
an exponential number of tasks, so using a single task for 10% of the data is quite significant. Results
with data-mixture adaptation are shown in Figure 3, which demonstrates that this acts as a strong
regularizer and moderately improves held-out loss.
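A minimal sketch of the data-mixture sampler; the 10%/90% split follows the text, while the batch size and data structures are assumptions:

```python
import random

def mixed_minibatch(new_task_examples, background_tasks, batch_size=32, new_frac=0.1):
    """Build a minibatch with ~10% examples from the new task and ~90% drawn
    from randomly chosen background training tasks."""
    n_new = max(1, int(batch_size * new_frac))
    batch = random.choices(new_task_examples, k=n_new)
    while len(batch) < batch_size:
        task = random.choice(background_tasks)   # each task is a list of examples
        batch.append(random.choice(task))
    random.shuffle(batch)
    return batch
```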
7 Comparison with Existing Work on Neural Program Induction
There has been a large amount of past work in neural program induction, and many of these works
have made different assumptions about the conditions of the induction scenario. Here, our goal is to
compare the four techniques presented here to each other and to past work across several attributes:
- Example-Driven Induction - ✓ = The system is trained using I/O examples as specification.
✗ = The system uses some other specification, such as natural language.
- No Explicit Program Representation - ✓ = The system can be trained without any explicit
program or program trace. ✗ = The system requires a program or program trace.
- Task-Specific Learning - ✓ = The model is trained to maximize performance on a particular
task. ✗ = The model is trained for a family of tasks.
- Cross-Task Knowledge Sharing - ✓ = The system uses information from multiple tasks
when training a model for a new task. ✗ = The system uses information from only a single
task for each model.
The comparison is presented in Table 1. The PLAIN technique is closely related to example-driven
induction models such as Neural Turing Machines [7] or Neural RAM [12], which typically have not
focused on cross-task knowledge transfer. The META model is closely related to the k-shot imitation
learning approaches [6, 5, 19], but these papers did not explore task-specific adaptation.
8 Experimental Results
In this section we evaluate the four techniques PLAIN, PLAIN + ADAPT, META, META + ADAPT on the
Karel domain. The primary goal is to compare performance relative to the number of training I/O
examples available for the test task.
System | Example-Driven Induction | No Explicit Program or Trace | Task-Specific Learning | Cross-Task Knowledge Sharing
Novel Architectures Applied to Program Induction:
NTM [7], Stack RNN [11], NRAM [12], Neural Transducers [8], Learn Algo [21], Others [4, 20, 2, 13] | ✓ | ✓ | ✓ | ✗
Trace-Augmented Induction:
NPI [18] | ✗ | ✗ | ✓ | ✓
Recursive NPI [3], NPL [14] | ✓ | ✗ | ✓ | ✓
Non Example-Driven Induction (e.g., Natural Language-Driven Induction):
Inducing Latent Programs [16], Neural Module Networks [1] | ✗ | ✓ | ✓ | ✓
k-shot Imitation Learning:
1-Shot Imitation Learning [6], RobustFill [5], Meta-Learning [19] | ✓ | ✓ | ✗ | ✓
Techniques Explored in This Work:
Plain Program Induction | ✓ | ✓ | ✓ | ✗
Portfolio-Adapted Program Induction | ✓ | ✓ | ✓ | ✓ (Weak)
Meta Program Induction | ✓ | ✓ | ✗ | ✓ (Strong)
Adapted Meta Program Induction | ✓ | ✓ | ✓ | ✓ (Strong)
Table 1: Comparison with Existing Work: Comparison of existing work across several attributes.
For the primary experiments reported here, the overall network architecture is sketched in Figure 2,
with details as follows: The input encoder is a 3-layer CNN with an FC+ReLU layer on top. The output
decoder is a 1-layer LSTM. For the META model, the task encoder uses a 1-layer CNN to encode the
input and output for a single example, which are concatenated along the feature-map dimension and fed
through a 6-layer CNN with an FC+ReLU layer on top. Multiple I/O examples are combined with
max-pooling on the final vector. All convolutional layers use a 3 × 3 kernel with a 64-dimensional
feature map. The fully-connected and LSTM layers are 1024-dimensional. Different model sizes are
explored later in this section. The dropout, learning rate, and batch size were optimized with grid
search for each value of n using a separate set of validation tasks. Training was performed using
SGD + momentum and gradient clipping, using an in-house toolkit.
All training, validation, and test programs were generated by treating the Karel DSL as a probabilistic
context-free grammar and performing top-down expansion with uniform probability at each node.
The input grids were generated by creating a grid of random size and inserting the agent, markers,
and obstacles at random. The output grid was generated by executing the program on the input grid;
if the agent ran into an obstacle or did not move, the example was thrown out and a new
input grid was generated. We limit the nesting depth of control flow to at most 4 (i.e., at most 4
nested if/while blocks can occur in a valid program). We sample I/O grids of size n × m, where
n and m are integers sampled uniformly from the range 2 to 20. We sample programs of size up to 20
statements. Every program and I/O grid in the training/validation/test set is unique.
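A hedged sketch of this rejection-sampling loop; `random_grid` and `executor` are hypothetical stand-ins for the grid sampler and the Karel interpreter, and grids are assumed to be comparable objects (e.g., nested tuples):

```python
import random

def sample_io_pair(program, executor, random_grid, max_tries=100):
    """Rejection-sample one I/O pair: draw a random grid of size 2..20 in each
    dimension, run the program, and retry if the agent crashed or nothing moved."""
    for _ in range(max_tries):
        n, m = random.randint(2, 20), random.randint(2, 20)
        grid_in = random_grid(n, m)             # random agent, markers, obstacles
        grid_out = executor(program, grid_in)   # None signals a crash
        if grid_out is not None and grid_out != grid_in:
            return grid_in, grid_out
    raise RuntimeError('could not generate a valid example')
```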
Results are presented in Figure 4, evaluated on 25 test tasks with 100 eval examples each.3 The x-axis
represents the number of training/demonstration I/O examples available for the test task, denoted as
n. The PLAIN system was trained only on these n examples directly. The PLAIN + ADAPT system was
also trained on these n examples, but was initialized using a portfolio of m models that had been
trained on d examples each. Three different values of m and d are shown in the figure. The META
model in this figure was trained on 1,000,000 tasks with 6 I/O examples each, but smaller amounts of
META training are shown in Figure 5. A point-by-point analysis is given below:
3 Note that each task and eval example is evaluated independently, so the size of the test set does not affect the accuracy.
Figure 4: Induction Results: Comparison of the four induction techniques on the Karel scenario.
The accuracy denotes the total percentage of examples for which the 1-best output grid was exactly
equal to the reference.
- PLAIN vs. PLAIN + ADAPT: PLAIN + ADAPT significantly outperforms PLAIN unless n is
very large (10k+), in which case both systems perform equally well. This result makes sense,
since we expect that much of the representation learning (e.g., how to encode an I/O grid
with a CNN) will be independent of the exact task.
- PLAIN + ADAPT Model Portfolio Size: Here, we compare the three model portfolio settings
shown for PLAIN + ADAPT. The number of available models (m = 1 vs. m = 25) has only
a small effect on accuracy, and this effect is only present for small values of n (e.g.,
n < 100), when the absolute performance is poor in any case. This implies that the majority
of cross-task knowledge sharing is independent of the exact details of a task.
On the other hand, the number of examples used to train each model in the portfolio
(d = 1000 vs. d = 100000) has a much larger effect, especially for moderate values of
n, e.g., 50 to 100. This makes sense, as we would not expect a significant benefit from
adaptation unless (a) d ≫ n, and (b) n is large enough to train a robust model.
- META vs. META + ADAPT: META + ADAPT does not improve over META for small values of
n, which is in line with the common observation that SGD-based training is difficult using a
small number of samples. However, for large values of n, the accuracy of META + ADAPT
increases significantly while the META model remains flat.
- PLAIN + ADAPT vs. META + ADAPT: Perhaps the most interesting result in the entire chart
is the fact that the accuracy crosses over, and PLAIN + ADAPT outperforms META + ADAPT by
a significant margin for large values of n (i.e., 1000+). Intuitively, this makes sense, since
the meta induction model was trained to represent an exponential family of tasks moderately
well, rather than to represent a single task with extreme precision.
Because the network architecture of the META model is a superset of the PLAIN model,
these results imply that for a large value of n, the model is becoming stuck in a poor local
optimum.4 To validate this hypothesis, we performed adaptation on the meta network after
randomly re-initializing all of the weights, and found that in this case the performance of
META + ADAPT matches that of PLAIN + ADAPT for large values of n. This confirms that the
pre-trained meta network is actually a worse starting point than training from scratch when
a large number of training I/O examples are available.
Learning Curves: The left side of Figure 4 presents average held-out loss for the various techniques
using 50 and 1000 training I/O examples. Epoch 0 on the META + ADAPT curve corresponds to the
META loss. We can see that the PLAIN + ADAPT loss starts out very high, but the model is able to
adapt to the new task quickly. The META + ADAPT loss starts out very strong, but only improves
by a small amount with adaptation. For 1000 I/O examples, PLAIN + ADAPT is able to overtake the
META + ADAPT model by a small amount, supporting what was observed in Figure 4.

4 Since the DNN is over-parameterized relative to the number of training examples, the system is able to
overfit the training examples in all cases. Therefore "poor local optimum" refers to the model's ability to
generalize to the test examples.

Figure 5: Ablation results for Karel Induction.
Varying the Model Size: Here, we present results on three architectures: Large = 64-dim feature
map, 1024-dim FC/RNN (used in the primary results); Medium = 32-dim feature map, 256-dim
FC/RNN; Small = 16-dim feature map, 64-dim FC/RNN. All models use the structure described
earlier in this section. We can see in the center of Figure 5 that model size has a much larger impact on
the META model than on the PLAIN model, which is intuitive: representing an entire family of tasks
from a given domain requires significantly more parameters than representing a single task. We can
also see that the larger models outperform the smaller models for any value of n, which is likely
because the dropout ratio was selected for each model size and value of n to mitigate overfitting.
Varying the Amount of META Training: The META model presented in Figure 4 represents a very
optimistic scenario which is trained on 1,000,000 background tasks with 6 I/O examples each. On
the right side of Figure 5, we present META results using 100,000 and 10,000 training tasks. We see a
significant loss in accuracy, which demonstrates that it is quite challenging to train a META model
that can generalize to new tasks.
9 Conclusions
In this work, we have contrasted two techniques for using cross-task knowledge sharing to improve
neural program induction, which are referred to as adapted program induction and meta program
induction. Both of these techniques can be used to improve accuracy on a new task by using models
that were trained on related tasks from the same family. However, adapted induction uses a
transfer-learning-style approach, while meta induction uses a k-shot-learning-style approach.
We applied these techniques to a challenging induction domain based on the Karel programming
language, and found that each technique, including unadapted induction, performs best under certain
conditions. Specifically, the preferred technique depends on the number of I/O examples (n) that
are available for the new task we want to learn, as well as the amount of background data available.
These conclusions can be summarized by the following table:
Technique | Background Data Required | When to Use
PLAIN | None | n is very large (10,000+)
PLAIN + ADAPT | Few related tasks (1+) with a large number of I/O examples (1,000+) | n is fairly large (1,000 to 10,000)
META | Many related tasks (100k+) with a small number of I/O examples (5+) | n is small (1 to 20)
META + ADAPT | Same as META | n is moderate (20 to 100)
Although we have only applied these techniques to a single domain, we believe that these conclusions
are highly intuitive and should generalize across domains. In future work, we plan to explore
more principled methods for adapted meta adaptation, in order to improve upon results in the very
limited-example scenario.
References
[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. pages 39-48,
2016.
[2] Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory.
CoRR, abs/1602.03218, 2016.
[3] Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via
recursion. In ICLR, 2017.
[4] Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. Associative long short-term memory. ICML, 2016.
[5] Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet
Kohli. Robustfill: Neural program learning under noisy I/O. CoRR, abs/1703.07469, 2017.
[6] Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter
Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401,
2014.
[8] Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce
with unbounded memory. NIPS, 2015.
[9] Sumit Gulwani, William R Harris, and Rishabh Singh. Spreadsheet data manipulation using examples.
Communications of the ACM, 2012.
[10] Mi-Young Huh, Pulkit Agrawal, and Alexei A. Efros. What makes imagenet good for transfer learning?
CoRR, abs/1608.08614, 2016. URL http://arxiv.org/abs/1608.08614.
[11] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In
NIPS, pages 190-198, 2015.
[12] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. ICLR, 2016.
[13] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through
probabilistic program induction. Science, 350(6266):1332-1338, 2015.
[14] Chengtao Li, Daniel Tarlow, Alexander L. Gaunt, Marc Brockschmidt, and Nate Kushman. Neural program
lattices. In ICLR, 2017.
[15] Minh-Thang Luong and Christopher D. Manning. Stanford neural machine translation systems for spoken
language domains. 2015.
[16] Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with
gradient descent. ICLR, 2016.
[17] Richard E Pattis. Karel the robot: a gentle introduction to the art of programming. John Wiley & Sons,
Inc., 1981.
[18] Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.
[19] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning
with memory-augmented neural networks. In International Conference on Machine Learning,
pages 1842-1850, 2016.
[20] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. NIPS,
2015.
[21] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from
examples. CoRR, abs/1511.07275, 2015. URL http://arxiv.org/abs/1511.07275.
6,417 | 6,804 | Bayesian Dyadic Trees and Histograms for Regression
Stéphanie van der Pas
Mathematical Institute
Leiden University
Leiden, The Netherlands
[email protected]
Veronika Ročková
Booth School of Business
University of Chicago
Chicago, IL, 60637
[email protected]
Abstract
Many machine learning tools for regression are based on recursive partitioning
of the covariate space into smaller regions, where the regression function can
be estimated locally. Among these, regression trees and their ensembles have
demonstrated impressive empirical performance. In this work, we shed light
on the machinery behind Bayesian variants of these methods. In particular, we
study Bayesian regression histograms, such as Bayesian dyadic trees, in the simple
regression case with just one predictor. We focus on the reconstruction of regression
surfaces that are piecewise constant, where the number of jumps is unknown. We
show that with suitably designed priors, posterior distributions concentrate around
the true step regression function at a near-minimax rate. These results do not require
the knowledge of the true number of steps, nor the width of the true partitioning
cells. Thus, Bayesian dyadic regression trees are fully adaptive and can recover the
true piecewise regression function nearly as well as if we knew the exact number
and location of jumps. Our results constitute the first step towards understanding
why Bayesian trees and their ensembles have worked so well in practice. As an
aside, we discuss prior distributions on balanced interval partitions and how they
relate to an old problem in geometric probability. Namely, we relate the probability
of covering the circumference of a circle with random arcs whose endpoints are
confined to a grid, a new variant of the original problem.
1 Introduction
Histogram regression methods, such as regression trees [1] and their ensembles [2], have an impressive
record of empirical success in many areas of application [3, 4, 5, 6, 7]. Tree-based machine learning
(ML) methods build a piecewise constant reconstruction of the regression surface based on ideas
of recursive partitioning. Perhaps the most popular partitioning schemes are the ones based on
parallel-axis splits. One recent example is the Mondrian process [8], which was introduced to the
ML community as a prior over tree data structures with interesting self-consistency properties. Many
efficient algorithms exist that can be deployed to fit regression histograms underpinned by some
partitioning scheme. Among these, Bayesian variants, such as Bayesian CART [9, 10] and BART
[11], have appealed to umpteen practitioners. There are several reasons why. Bayesian tree-based
regression tools (a) can adapt to regression surfaces without any need for pruning, (b) are reluctant to
overfit, (c) provide an avenue for uncertainty statements via posterior distributions. While practical
success stories abound [3, 4, 5, 6, 7], the theoretical understanding of Bayesian regression tree
methods has been lacking. In this work, we study the quality of posterior distributions with regard
to the three properties mentioned above. We provide first theoretical results that contribute to the
understanding of Bayesian Gaussian regression methods based on recursive partitioning.
Our performance metric will be the speed of posterior concentration/contraction around the true
regression function. This is ultimately a frequentist assessment, describing the typical behavior of the
posterior under the true generative model [12]. Posterior concentration rate results are now slowly
entering the machine learning community as a tool for obtaining more insights into Bayesian methods
[13, 14, 15, 16, 17]. Such results quantify not only the typical distance between a point estimator
(posterior mean/median) and the truth, but also the typical spread of the posterior around the truth.
Ideally, most of the posterior mass should be concentrated in a ball centered around the true value
with a radius proportional to the minimax rate [12, 18]. Being inherently a performance measure of
both location and spread, optimal posterior concentration provides a necessary certificate for further
uncertainty quantification [19, 20, 21]. Beyond uncertainty assessment, theoretical guarantees that
describe the average posterior shrinkage behavior have also been a valuable instrument for assessing
the suitability of priors. As such, these results can often provide useful guidelines for the choice of
tuning parameters, e.g. the latent Dirichlet allocation model [14].
Despite the rapid growth of this frequentist-Bayesian theory field, posterior concentration results
for Bayesian regression histograms/trees/forests have, so far, been unavailable. Here, we adopt this
theoretical framework to get new insights into why these methods work so well.
Related Work
Bayesian density estimation with step functions is a relatively well-studied problem [22, 23, 24]. The
literature on Bayesian histogram regression is a bit less crowded. Perhaps the closest to our conceptual
framework is the work by Coram and Lalley [25], who studied Bayesian non-parametric binary
regression with uniform mixture priors on step functions. The authors focused on L1 consistency.
Here, we focus on posterior concentration rather than consistency. We are not aware of any other
related theoretical study of Bayesian histogram methods for Gaussian regression.
Our Contributions
In this work we focus on a canonical regression setting with merely one predictor. We study
hierarchical priors on step functions and provide conditions under which the posteriors concentrate
optimally around the true regression function. We consider the case when the true regression function
itself is a step function, i.e. a tree or a tree ensemble, where the number and location of jumps is
unknown.
We start with a very simple space of approximating step functions, supported on equally sized intervals
where the number of splits is equipped with a prior. These partitions include dyadic regression trees.
We show that for a suitable complexity prior, all relevant information about the true regression
function (jump sizes and the number of jumps) is learned from the data automatically. During the
course of the proof, we develop a notion of the complexity of a piecewise constant function relative
to its approximating class.
Next, we take a larger approximating space consisting of functions supported on balanced partitions
that do not necessarily have to be of equal size. These correspond to more general trees with splits at
observed values. With a uniform prior over all balanced partitions, we are able to achieve a nearly
ideal performance (as if we knew the number and the location of jumps). As an aside, we describe
the distribution of interval lengths obtained when the splits are sampled uniformly from a grid. We
relate this distribution to the probability of covering the circumference of a circle with random arcs, a
problem in geometric probability that dates back to [26, 27]. Our version of this problem assumes
that the splits are chosen from a discrete grid rather than from a unit interval.
Notation
With $\asymp$ and $\lesssim$ we will denote an equality and an inequality, up to a constant. The $\varepsilon$-covering number of a set $\Omega$ for a semimetric $d$, denoted by $N(\varepsilon, \Omega, d)$, is the minimal number of $d$-balls of radius $\varepsilon$ needed to cover the set $\Omega$. We denote by $\phi(\cdot)$ the standard normal density and by $P_f^n = \prod_{i=1}^n P_{f,i}$ the $n$-fold product measure of the $n$ independent observations under (1) with a regression function $f(\cdot)$. By $\mathbb{P}_x^n = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ we denote the empirical distribution of the observed covariates, by $\|\cdot\|_n$ the norm on $L^2(\mathbb{P}_x^n)$ and by $\|\cdot\|_2$ the standard Euclidean norm.
2 Bayesian Histogram Regression
We consider a classical nonparametric regression model, where response variables $Y^{(n)} = (Y_1, \dots, Y_n)'$ are related to input variables $x^{(n)} = (x_1, \dots, x_n)'$ through the function $f_0$ as follows
$$Y_i = f_0(x_i) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, 1), \quad i = 1, \dots, n. \qquad (1)$$
We assume that the covariate values $x_i$ are one-dimensional, fixed and have been rescaled so that $x_i \in [0, 1]$. Partitioning-based regression methods are often invariant to monotone transformations of observations. In particular, when $f_0$ is a step function, standardizing the distance between the observations, and thereby the split points, has no effect on the nature of the estimation problem. Without loss of generality, we will thereby assume that the observations are aligned on an equispaced grid.
Assumption 1. (Equispaced Grid) We assume that the scaled predictor values satisfy $x_i = i/n$ for each $i = 1, \dots, n$.
This assumption implies that partitions that are balanced in terms of the Lebesgue measure will be balanced also in terms of the number of observations. A similar assumption was imposed by Donoho [28] in his study of Dyadic CART.
The underlying regression function $f_0 : [0, 1] \to \mathbb{R}$ is assumed to be a step function, i.e.
$$f_0(x) = \sum_{k=1}^{K_0} \beta_k^0\, \mathbb{I}_{\Omega_k^0}(x),$$
where $\{\Omega_k^0\}_{k=1}^{K_0}$ is a partition of $[0, 1]$ into $K_0$ non-overlapping intervals. We assume that $\{\Omega_k^0\}_{k=1}^{K_0}$ is minimal, meaning that $f_0$ cannot be represented with a smaller partition (with less than $K_0$ pieces). Each partitioning cell $\Omega_k^0$ is associated with a step size $\beta_k^0$, determining the level of the function $f_0$ on $\Omega_k^0$. The entire vector of $K_0$ step sizes will be denoted by $\beta^0 = (\beta_1^0, \dots, \beta_{K_0}^0)'$.
One might like to think of f0 as a regression tree with K0 bottom leaves. Indeed, every step function
can be associated with an equivalence class of trees that live on the same partition but differ in
their tree topology. The number of bottom leaves K0 will be treated as unknown throughout this
paper. Our goal will be designing a suitable class of priors on step functions so that the posterior
concentrates tightly around f0 . Our analysis with a single predictor has served as a precursor to a
full-blown analysis for high-dimensional regression trees [29].
We consider an approximating space of all step functions (with $K = 1, 2, \dots$ bottom leaves)
$$\mathcal{F} = \bigcup_{K=1}^{\infty} \mathcal{F}_K, \qquad (2)$$
which consists of smaller spaces (or shells) of all $K$-step functions
$$\mathcal{F}_K = \Big\{ f_\beta : [0, 1] \to \mathbb{R};\; f_\beta(x) = \sum_{k=1}^{K} \beta_k\, \mathbb{I}_{\Omega_k}(x) \Big\},$$
each indexed by a partition $\{\Omega_k\}_{k=1}^K$ and a vector of $K$ step heights $\beta$. The fundamental building block of our theoretical analysis will be the prior on $\mathcal{F}$. This prior distribution has three main ingredients, described in detail below: (a) a prior on the number of steps $K$, (b) a prior on the partitions $\{\Omega_k\}_{k=1}^K$ of size $K$, and (c) a prior on step sizes $\beta = (\beta_1, \dots, \beta_K)'$.
2.1 Prior $\pi_K(\cdot)$ on the Number of Steps $K$
To avoid overfitting, we assign an exponentially decaying prior distribution that penalizes partitions with too many jumps.
Definition 2.1. (Prior on K) The prior on the number of partitioning cells $K$ satisfies
$$\pi_K(k) \equiv \Pi(K = k) \propto \exp(-c_K\, k \log k) \quad \text{for } k = 1, 2, \dots. \qquad (3)$$
This prior is no stranger to non-parametric problems. It was deployed for stepwise reconstructions of
densities [24, 23] and regression surfaces [25]. When cK is large, this prior is concentrated on models
with small complexity where overfitting should not occur. Decreasing cK leads to the smearing of
the prior mass over partitions with more jumps. This is illustrated in Figure 1, which depicts the prior
for various choices of cK . We provide recommendations for the choice of cK in Section 3.1.
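For concreteness, the prior (3) is easy to normalize and sample numerically over a truncated range. The following minimal Python sketch is our illustration, not from the paper; the truncation point k_max and the default c_K = 1 are our choices, the latter matching the recommendation in Section 3.1.

```python
import numpy as np

def sample_K(c_K=1.0, k_max=1000, size=1, rng=None):
    """Draw from pi_K(k) proportional to exp(-c_K * k * log k), truncated at k_max.

    The tail decays super-exponentially, so the truncation error is negligible
    for moderate k_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, k_max + 1)
    log_w = -c_K * k * np.log(k)        # log of the unnormalized prior mass (3)
    w = np.exp(log_w - log_w.max())     # stabilize before normalizing
    return rng.choice(k, size=size, p=w / w.sum())

print(sample_K(c_K=1.0, size=10))       # most draws are small: large K is penalized
```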
2.2 Prior $\pi_\Omega(\cdot \mid K)$ on Interval Partitions $\{\Omega_k\}_{k=1}^K$
After selecting the number of steps $K$ from $\pi_K(k)$, we assign a prior over interval partitions $\pi_\Omega(\cdot \mid K)$. We will consider two important special cases.
[Figure 1: (Left) Prior $\pi_K(k)$ on the tree size for several values of $c_K \in \{1, 1/2, 1/5, 1/10\}$. (Right) Best approximations of $f_0$ (in the $\ell_2$ sense) by step functions supported on equispaced blocks of size $K \in \{2, 5, 10\}$.]
2.2.1 Equivalent Blocks
Perhaps the simplest partition is based on statistically equivalent blocks [30], where all the cells are required to have the same number of points. This is also known as the K-spacing rule that partitions the unit interval using order statistics of the observations.
Definition 2.2. (Equivalent Blocks) Let $x_{(i)}$ denote the $i$th order statistic of $x = (x_1, \dots, x_n)'$, where $x_{(n)} \equiv 1$ and $n = Kc$ for some $c \in \mathbb{N}\backslash\{0\}$. Denote by $x_{(0)} \equiv 0$. A partition $\{\Omega_k\}_{k=1}^K$ consists of $K$ equivalent blocks when $\Omega_k = (x_{(j_k)}, x_{(j_{k+1})}]$, where $j_k = (k-1)c$.
A variant of this definition can be obtained in terms of interval lengths rather than numbers of observations.
Definition 2.3. (Equispaced Blocks) A partition $\{\Omega_k\}_{k=1}^K$ consists of $K$ equispaced blocks $\Omega_k$ when $\Omega_k = \big(\tfrac{k-1}{K}, \tfrac{k}{K}\big]$ for $k = 1, \dots, K$.
When $K = 2^s$ for some $s \in \mathbb{N}\backslash\{0\}$, the equispaced partition corresponds to a full complete binary tree with splits at dyadic rationals. If the observations $x_i$ lie on a regular grid (Assumption 1), then Definitions 2.2 and 2.3 are essentially equivalent. We will thereby focus on equivalent blocks (EB) and denote such a partition (for a given $K > 0$) with $\Omega_K^{EB}$. Because there is only one such partition for each $K$, the prior $\pi_\Omega(\cdot \mid K)$ has a single point mass at $\Omega_K^{EB}$. With $\Omega^{EB} = \bigcup_{K=1}^{\infty} \Omega_K^{EB}$ we denote the set of all EB partitions for $K = 1, 2, \dots$. We will use these partitioning schemes as a jump-off point.
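Definition 2.2 translates directly into code. The small sketch below is ours (variable names are our choices); on the equispaced grid of Assumption 1, the resulting blocks coincide with the equispaced blocks of Definition 2.3.

```python
import numpy as np

def equivalent_blocks(x, K):
    """Build the K equivalent blocks of Definition 2.2 from observations x.

    Requires n = K * c; cell k is (x_{(j_k)}, x_{(j_{k+1})}] with j_k = (k-1)c,
    using the conventions x_{(0)} = 0 and x_{(n)} = 1.
    """
    n = len(x)
    assert n % K == 0, "Definition 2.2 assumes n = K * c"
    c = n // K
    xs = np.concatenate(([0.0], np.sort(x)))   # prepend x_(0) = 0
    return [(xs[(k - 1) * c], xs[k * c]) for k in range(1, K + 1)]

x = np.arange(1, 13) / 12.0                    # equispaced grid (Assumption 1)
print(equivalent_blocks(x, K=3))               # cells (0,1/3], (1/3,2/3], (2/3,1]
```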
2.2.2 Balanced Intervals
Equivalent (equispaced) blocks are deterministic and, as such, do not provide much room for learning about the actual location of jumps in $f_0$. Balanced intervals, introduced below, are a richer class of partitions that tolerate a bit more imbalance. First, we introduce the notion of cell counts $\mu(\Omega_k)$. For each interval $\Omega_k$, we write
$$\mu(\Omega_k) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(x_i \in \Omega_k), \qquad (4)$$
the proportion of observations falling inside $\Omega_k$. Note that for equivalent blocks, we can write $\mu(\Omega_1) = \cdots = \mu(\Omega_K) = c/n = 1/K$.
Definition 2.4. (Balanced Intervals) A partition $\{\Omega_k\}_{k=1}^K$ is balanced if
$$\frac{C_{\min}^2}{K} \le \mu(\Omega_k) \le \frac{C_{\max}^2}{K} \quad \text{for all } k = 1, \dots, K \qquad (5)$$
for some universal constants $C_{\min} \le 1 \le C_{\max}$ not depending on $K$.
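Conditions (4) and (5) are straightforward to evaluate numerically. The short sketch below is our illustration; the constants and the example partition are arbitrary choices.

```python
import numpy as np

def cell_counts(x, cells):
    """mu(Omega_k) from (4): fraction of observations in each half-open cell (u, v]."""
    x = np.asarray(x)
    return np.array([np.mean((x > u) & (x <= v)) for (u, v) in cells])

def is_balanced(x, cells, C_min=1.0, C_max=1.0):
    """Check condition (5): C_min^2/K <= mu(Omega_k) <= C_max^2/K for all k."""
    K = len(cells)
    mu = cell_counts(x, cells)
    return bool(np.all((C_min**2 / K <= mu) & (mu <= C_max**2 / K)))

x = np.arange(1, 11) / 10.0
cells = [(0.0, 0.3), (0.3, 0.6), (0.6, 1.0)]        # a mildly unbalanced 3-partition
print(cell_counts(x, cells))                        # [0.3 0.3 0.4]
print(is_balanced(x, cells, C_min=0.9, C_max=1.2))  # True: 0.27 <= mu_k <= 0.48
```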
[Figure 2: (a) $K = 2$, (b) $K = 3$. Two sets $E_K$ of possible stick lengths that satisfy the minimal cell-size condition $|\Omega_k| \ge C$, with $n = 10$, $C = 2/n$ and $K = 2, 3$.]
The following variant of the balancing condition uses interval widths rather than cell counts: $\widetilde{C}_{\min}^2/K \le |\Omega_k| \le \widetilde{C}_{\max}^2/K$. Again, under Assumption 1, these two definitions are equivalent. In the sequel, we will denote by $\Omega_K^{BI}$ the set of all balanced partitions consisting of $K$ intervals and by $\Omega^{BI} = \bigcup_{K=1}^{\infty} \Omega_K^{BI}$ the set of all balanced intervals of sizes $K = 1, 2, \dots$. It is worth pointing out that the balance assumption on the interval partitions can be relaxed, at the expense of a log factor in the concentration rate [29].
With balanced partitions, the $K$th shell $\mathcal{F}_K$ of the approximating space $\mathcal{F}$ in (2) consists of all step functions that are supported on partitions $\Omega_K^{BI}$ and have $K-1$ points of discontinuity $u_k \in I_n \equiv \{x_i : i = 1, \dots, n-1\}$ for $k = 1, \dots, K-1$. For equispaced blocks in Definition 2.3, we assumed that the points of subdivision were deterministic, i.e. $u_k = k/K$. For balanced partitions, we assume that the $u_k$ are random and chosen amongst the observed values $x_i$. The order statistics of the vector of splits $u = (u_1, \dots, u_{K-1})'$ uniquely define a segmentation of $[0, 1]$ into $K$ intervals $\Omega_k = (u_{(k-1)}, u_{(k)}]$, where $u_{(k)}$ designates the $k$th smallest value in $u$ and $u_{(0)} \equiv 0$, $u_{(K)} = x_{(n)} \equiv 1$.
Our prior over balanced intervals $\pi_\Omega(\cdot \mid K)$ will be defined implicitly through a uniform prior over the split vectors $u$. Namely, the prior over balanced partitions $\Omega_K^{BI}$ satisfies
$$\pi_\Omega\big(\{\Omega_k\}_{k=1}^K \mid K\big) = \frac{\mathbb{I}\big(\{\Omega_k\}_{k=1}^K \in \Omega_K^{BI}\big)}{\mathrm{card}(\Omega_K^{BI})}. \qquad (6)$$
In the following Lemma, we obtain upper bounds on $\mathrm{card}(\Omega_K^{BI})$ and discuss how they relate to an old problem in geometric probability. In the sequel, we denote with $|\Omega_k|$ the lengths of the segments defined through the split points $u$.
Lemma 2.1. Assume that $u = (u_1, \dots, u_{K-1})'$ is a vector of independent random variables obtained by uniform sampling (without replacement) from $I_n$. Then under Assumption 1, we have for $1/n < C < 1/K$
$$\Pi\Big(\min_{1 \le k \le K} |\Omega_k| \ge C\Big) = \frac{\binom{\lfloor n(1-KC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}} \qquad (7)$$
and
$$\Pi\Big(\max_{1 \le k \le K} |\Omega_k| \le C\Big) = 1 - \sum_{k=1}^{\widetilde{n}} (-1)^{k+1} \binom{n-1}{k} \frac{\binom{\lfloor n(1-kC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}}, \qquad (8)$$
where $\widetilde{n} = \min\{n-1, \lfloor 1/C \rfloor\}$.
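Formula (7) lends itself to a quick numerical sanity check. The sketch below is our code (numpy assumed): it samples the $K-1$ splits uniformly without replacement from the grid $I_n$ and compares the Monte Carlo estimate of $\Pi(\min_k |\Omega_k| \ge C)$ with the closed form.

```python
import numpy as np
from math import comb

def min_width_prob_exact(n, K, a):
    """Pi(min_k |Omega_k| >= C) from (7), with C = a/n on the grid I_n."""
    return comb(n - K * a + K - 1, K - 1) / comb(n - 1, K - 1)

def min_width_prob_mc(n, K, a, trials=100_000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    C, hits = a / n, 0
    for _ in range(trials):
        u = np.sort(rng.choice(np.arange(1, n), size=K - 1, replace=False)) / n
        widths = np.diff(np.concatenate(([0.0], u, [1.0])))
        hits += widths.min() >= C - 1e-12      # guard against float rounding
    return hits / trials

n, K, a = 10, 3, 2
print(min_width_prob_exact(n, K, a))   # C(6,2)/C(9,2) = 15/36 ~ 0.4167
print(min_width_prob_mc(n, K, a))      # ~ 0.4167 up to Monte Carlo error
```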
Proof. The denominator of (7) follows from the fact that there are $n-1$ possible splits for the $K-1$ points of discontinuity $u_k$. The numerator is obtained after adapting the proof of Lemma 2 of Flatto and Konheim [31]. Without loss of generality, we will assume that $C = a/n$ for some $a = 1, \dots, \lfloor n/K \rfloor$ so that $n(1 - KC)$ is an integer. Because the jumps $u_k$ can only occur on the grid $I_n$, we have $|\Omega_k| = j/n$ for some $j = 1, \dots, n-1$. It follows from Lemma 1 of Flatto and Konheim [31] that the set $E_K = \{|\Omega_k| : \sum_{k=1}^K |\Omega_k| = 1 \text{ and } |\Omega_k| \ge C \text{ for } k = 1, \dots, K\}$ lies in the interior of a convex hull of $K$ points $v_r = (1 - KC)e_r + C\sum_{k=1}^K e_k$ for $r = 1, \dots, K$, where $e_r = (e_{r1}, \dots, e_{rK})'$ are unit base vectors, i.e. $e_{rj} = \mathbb{I}(r = j)$. Two examples of the set $E_K$ (for $K = 2$ and $K = 3$) are depicted in Figure 2. In both figures, $n = 10$ (i.e. 9 candidate split points) and $a = 2$. With $K = 2$ (Figure 2(a)), there are only $7 = \binom{n(1-KC)+K-1}{K-1}$ pairs of interval lengths $(|\Omega_1|, |\Omega_2|)'$ that satisfy the minimal cell condition. These points lie on a grid between the two vertices $v_1 = (1 - C, C)$ and $v_2 = (C, 1 - C)$. With $K = 3$, the convex hull of the points $v_1 = (1 - 2C, C, C)'$, $v_2 = (C, 1 - 2C, C)'$ and $v_3 = (C, C, 1 - 2C)'$ corresponds to a diagonal dissection of a cube of side length $(1 - 3C)$ (Figure 2(b), again with $a = 2$ and $n = 10$). The number of lattice points in the interior (and on the boundary) of such a tetrahedron corresponds to an arithmetic sum $\frac{1}{2}(n - 3a + 2)(n - 3a + 1) = \binom{n-3a+2}{2}$. So far, we showed (7) for $K = 2$ and $K = 3$. To complete the induction argument, suppose that the formula holds for some arbitrary $K > 0$. Then the size of the lattice inside (and on the boundary) of a $(K+1)$-tetrahedron of side length $[1 - (K+1)C]\sqrt{2}$ can be obtained by summing lattice sizes inside $K$-tetrahedrons of increasing side lengths $0, \sqrt{2}/n, 2\sqrt{2}/n, \dots, [1 - (K+1)C]\sqrt{2}$, i.e.
$$\sum_{j=K-1}^{n[1-(K+1)C]+K-1} \binom{j}{K-1} = \binom{n[1-(K+1)C]+K}{K},$$
where we used the fact $\sum_{j=K}^{N} \binom{j}{K} = \binom{N+1}{K+1}$. The second statement (8) is obtained by writing the event as a complement of the union of events and applying the method of inclusion-exclusion.
Remark 2.1. Flatto and Konheim [31] showed that the probability of covering a circle with random arcs of length $C$ is equal to the probability that all segments of the unit interval, obtained with iid random uniform splits, are smaller than $C$. Similarly, the probability (8) could be related to the probability of covering the circle with random arcs whose endpoints are chosen from a grid of $n-1$ equidistant points on the circumference.
There are $\binom{n-1}{K-1}$ partitions of size $K$, of which $\binom{\lfloor n(1-\widetilde{C}_{\min}^2)\rfloor + K - 1}{K-1}$ satisfy the minimal cell width balancing condition (where $\widetilde{C}_{\min}^2 > K/n$). This number gives an upper bound on the combinatorial complexity of balanced partitions $\mathrm{card}(\Omega_K^{BI})$.
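The count $7 = \binom{n(1-KC)+K-1}{K-1}$ for $n = 10$, $a = 2$, $K = 2$ in the proof above can also be reproduced by exhaustive enumeration over the $\binom{n-1}{K-1}$ split choices. A small sketch of ours:

```python
from itertools import combinations
from math import comb

def count_min_width(n, K, a):
    """Number of grid partitions of [0,1] into K cells with every |Omega_k| >= a/n."""
    good = 0
    for splits in combinations(range(1, n), K - 1):  # split positions in {1,...,n-1}
        pts = (0,) + splits + (n,)
        widths = [pts[i + 1] - pts[i] for i in range(K)]
        good += min(widths) >= a
    return good

n, K, a = 10, 2, 2
print(count_min_width(n, K, a))          # 7, matching the worked example above
print(comb(n - K * a + K - 1, K - 1))    # 7, the numerator of (7)
```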
2.3 Prior $\pi(\beta \mid K)$ on Step Heights $\beta$
To complete the prior on $\mathcal{F}_K$, we take independent normal priors on each of the coefficients. Namely
$$\pi(\beta \mid K) = \prod_{k=1}^{K} \phi(\beta_k), \qquad (9)$$
where $\phi(\cdot)$ is the standard normal density.
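Assembled, a draw from the full hierarchical prior proceeds as $K$ from (3), splits from (6), heights from (9). The sketch below is our illustration; for simplicity it samples the unrestricted-uniform variant that ignores the balance restriction in (6), which a faithful sampler would enforce by rejection.

```python
import numpy as np

def sample_step_function(n, c_K=1.0, k_max=50, rng=None):
    """One draw from the hierarchical prior: K from (3), splits from (6),
    heights from (9). The balance restriction in (6) is not enforced here."""
    rng = np.random.default_rng() if rng is None else rng
    ks = np.arange(1, k_max + 1)
    w = np.exp(-c_K * ks * np.log(ks))
    K = int(rng.choice(ks, p=w / w.sum()))
    # K - 1 splits drawn uniformly without replacement from the grid I_n
    u = np.sort(rng.choice(np.arange(1, n), size=K - 1, replace=False)) / n
    beta = rng.standard_normal(K)          # independent N(0, 1) heights, as in (9)
    return u, beta

u, beta = sample_step_function(n=100, rng=np.random.default_rng(1))
print(len(beta), u)                        # K heights and the K - 1 interior splits
```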
3 Main Results
A crucial ingredient of our proof will be understanding how well one can approximate $f_0$ with other step functions (supported on partitions $\Omega$, which are either equivalent blocks $\Omega^{EB}$ or balanced partitions $\Omega^{BI}$). We will describe the approximation error in terms of the overlap between the true partition $\{\Omega_k^0\}_{k=1}^{K_0}$ and the approximating partitions $\{\Omega_k\}_{k=1}^{K} \in \Omega$. More formally, we define the restricted cell count (according to Nobel [32]) as
$$m\big(V; \{\Omega_k^0\}_{k=1}^{K_0}\big) = \big|\{\Omega_k^0 : \Omega_k^0 \cap V \ne \emptyset\}\big|,$$
the number of cells in $\{\Omega_k^0\}_{k=1}^{K_0}$ that overlap with an interval $V \subseteq [0, 1]$. Next, we define the complexity of $f_0$ as the smallest size of a partition in $\Omega$ needed to completely cover $f_0$ without any overlap.
Definition 3.1. (Complexity of $f_0$ w.r.t. $\Omega$) We define $K(f_0, \Omega)$ as the smallest $K$ such that there exists a $K$-partition $\{\Omega_k\}_{k=1}^K$ in the class of partitions $\Omega$ for which
$$m\big(\Omega_k; \{\Omega_k^0\}_{k=1}^{K_0}\big) = 1 \quad \text{for all } k = 1, \dots, K.$$
The number $K(f_0, \Omega)$ will be referred to as the complexity of $f_0$ w.r.t. $\Omega$.
The complexity number $K(f_0, \Omega)$ indicates the optimal number of steps needed to approximate $f_0$ with a step function (supported on partitions in $\Omega$) without any error. It depends on the true number of jumps $K_0$ as well as the true interval lengths $|\Omega_k^0|$. If the minimal partition $\{\Omega_k^0\}_{k=1}^{K_0}$ resided in the approximating class, i.e. $\{\Omega_k^0\}_{k=1}^{K_0} \in \Omega$, then we would obtain $K(f_0, \Omega) = K_0$, the true number of steps. On the other hand, when $\{\Omega_k^0\}_{k=1}^{K_0} \notin \Omega$, the complexity number $K(f_0, \Omega)$ can be much larger. This is illustrated in Figure 1 (right), where the true partition $\{\Omega_k^0\}_{k=1}^{K_0}$ consists of $K_0 = 4$ unequal pieces and we approximate it with equispaced blocks with $K = 2, 5, 10$ steps. Because the intervals $\Omega_k^0$ are not equal and the smallest one has a length $1/10$, we need $K(f_0, \Omega^{EB}) = 10$ equispaced blocks to perfectly approximate $f_0$. For our analysis, we do not need to assume that $\{\Omega_k^0\}_{k=1}^{K_0} \in \Omega$ (i.e. $f_0$ does not need to be inside the approximating class) or that $K(f_0, \Omega)$ is finite. The complexity number can increase with $n$, where sharper performance is obtained when $f_0$ can be approximated error-free with some $f \in \mathcal{F}$, where $f$ has a small number of discontinuities relative to $n$.
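For equispaced blocks, $K(f_0, \Omega^{EB})$ is simply the smallest $K$ whose grid $\{1/K, \dots, (K-1)/K\}$ contains every true jump location. The sketch below is ours and recovers the value 10 discussed above; the particular jump locations are illustrative assumptions, chosen so that the smallest cell has length $1/10$ as in Figure 1.

```python
from fractions import Fraction

def complexity_EB(breakpoints, K_max=10_000):
    """Smallest K such that every true jump location is a multiple of 1/K,
    i.e. K(f_0, Omega^EB) from Definition 3.1 for equispaced blocks."""
    bs = [Fraction(b).limit_denominator(K_max) for b in breakpoints]
    for K in range(1, K_max + 1):
        if all((K * b).denominator == 1 for b in bs):
            return K
    raise ValueError("no exact cover up to K_max")

# A K_0 = 4 step function with (hypothetical) jumps at 0.1, 0.4 and 0.7
print(complexity_EB([Fraction(1, 10), Fraction(4, 10), Fraction(7, 10)]))  # 10
```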
Another way to view $K(f_0, \Omega)$ is as the ideal partition size on which the posterior should concentrate. If this number were known, we could achieve a near-minimax posterior concentration rate $n^{-1/2}\sqrt{K(f_0, \Omega)\log[n/K(f_0, \Omega)]}$ (Remark 3.3). The actual minimax rate for estimating a piecewise constant $f_0$ (consisting of $K_0 > 2$ pieces) is $n^{-1/2}\sqrt{K_0\log(n/K_0)}$ [33]. In our main results, we will target the nearly optimal rate expressed in terms of $K(f_0, \Omega)$.
3.1 Posterior Concentration for Equivalent Blocks
Our first result shows that the minimax rate is nearly achieved, without any assumptions on the number of pieces of $f_0$ or the sizes of the pieces.
Theorem 3.1. (Equivalent Blocks) Let $f_0 : [0, 1] \to \mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on equivalent blocks, equipped with priors $\pi_K(\cdot)$ and $\pi(\beta \mid K)$ as in (3) and (9). Denote with $K_{f_0} \equiv K(f_0, \Omega^{EB})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \ge M_n\, n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (10)$$
in $P_{f_0}^n$-probability, for every $M_n \to \infty$ as $n \to \infty$.
Before we proceed with the proof, a few remarks ought to be made. First, it is worthwhile to
emphasize that the statement in Theorem 3.1 is a frequentist one as it relates to an aggregated
behavior of the posterior distributions obtained under the true generative model $P_{f_0}^n$.
Second, the theorem shows that the Bayesian procedure performs an automatic adaptation to
K(f0 , ?EB ). The posterior will concentrate on EB partitions that are fine enough to approximate f0
well. Thus, we are able to recover the true function as well as if we knew K(f0 , ?EB ).
Third, it is worth mentioning that, under Assumption 1, Theorem 3.1 holds for equivalent as well as
equisized blocks. In this vein, it describes the speed of posterior concentration for dyadic regression
trees. Indeed, as mentioned previously, with $K = 2^s$ for some $s \in \mathbb{N}\backslash\{0\}$, the equisized partition
corresponds to a full binary tree with splits at dyadic rationals.
Another interesting insight is that the Gaussian prior (9), while selected for mathematical convenience,
turns out to be sufficient for optimal recovery. In other words, despite the relatively large amount of
mass near zero, the Gaussian prior does not rule out optimal posterior concentration. Our standard
normal prior is a simpler version of the Bayesian CART prior, which determines the variance from
the data [9].
Let $K_{f_0} \equiv K(f_0, \Omega^{EB})$ be as in Definition 3.1. Theorem 3.1 is proved by verifying the three conditions of Theorem 4 of [18], for $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$ and $\mathcal{F}_n = \bigcup_{K=0}^{k_n}\mathcal{F}_K$, with $k_n$ of the order $K_{f_0}\log(n/K_{f_0})$. The approximating subspace $\mathcal{F}_n \subset \mathcal{F}$ should be rich enough to approximate $f_0$ well and it should receive most of the prior mass. The conditions for posterior contraction at the rate $\varepsilon_n$ are:
(C1) $\sup_{\varepsilon > \varepsilon_n} \log N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_n : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big) \le n\varepsilon_n^2$,
(C2) $\dfrac{\Pi(\mathcal{F}\backslash\mathcal{F}_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon_n^2)} = o\big(e^{-2n\varepsilon_n^2}\big)$,
(C3) $\dfrac{\Pi(f \in \mathcal{F}_n : j\varepsilon_n < \|f - f_0\|_n \le 2j\varepsilon_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon_n^2)} \le e^{\frac{j^2}{4}n\varepsilon_n^2}$ for all sufficiently large $j$.
The entropy condition (C1) restricts attention to EB partitions with small $K$. As will be seen from the proof, the largest allowed partitions have at most (a constant multiple of) $K_{f_0}\log(n/K_{f_0})$ pieces. Condition (C2) requires that the prior does not promote partitions with more than $K_{f_0}\log(n/K_{f_0})$ pieces. This property is guaranteed by the exponentially decaying prior $\pi_K(\cdot)$, which penalizes large partitions.
The final condition, (C3), requires that the prior charges a $\|\cdot\|_n$ neighborhood of the true function. In our proof, we verify this condition by showing that the prior mass on step functions of the optimal size $K_{f_0}$ is sufficiently large.
Proof. We verify the three conditions (C1), (C2) and (C3).
(C1) Let $\varepsilon > \varepsilon_n$ and $K \in \mathbb{N}$. For $f_\alpha, f_\beta \in \mathcal{F}_K$, we have $K^{-1}\|\alpha - \beta\|_2^2 = \|f_\alpha - f_\beta\|_n^2$ because $\mu(\Omega_k) = 1/K$ for each $k$. We now argue as in the proof of Theorem 12 of [18] to show that $N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_K : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big)$ can be bounded by the number of $\sqrt{K}\varepsilon/36$-balls required to cover a $\sqrt{K}\varepsilon$-ball in $\mathbb{R}^K$. This number is bounded above by $108^K$. Summing over $K$, we recognize a geometric series. Taking the logarithm of the result, we find that (C1) is satisfied if $\log(108)(k_n + 1) \le n\varepsilon_n^2$.
(C2) We bound the denominator by:
$$\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon^2) \ge \pi_K(K_{f_0})\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta - \beta_0^{ext}\|_2 \le \varepsilon\sqrt{K_{f_0}}\big),$$
where $\beta_0^{ext} \in \mathbb{R}^{K_{f_0}}$ is an extended version of $\beta^0 \in \mathbb{R}^{K_0}$, containing the coefficients for $f_0$ expressed as a step function on the partition $\{\Omega_k^0\}_{k=1}^{K_{f_0}}$. This can be bounded from below by
$$\frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta\|_2^2 \le \varepsilon^2 K_{f_0}/2\big) > \frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\int_0^{\varepsilon^2 K_{f_0}/2}\frac{x^{K_{f_0}/2-1}e^{-x/2}}{2^{K_{f_0}/2}\,\Gamma(K_{f_0}/2)}\,\mathrm{d}x.$$
We bound this from below by bounding the exponential at the upper integration limit, yielding:
$$\frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\,\frac{e^{-\varepsilon^2 K_{f_0}/4}}{2^{K_{f_0}}\,\Gamma(K_{f_0}/2+1)}\,\varepsilon^{K_{f_0}}\,K_{f_0}^{K_{f_0}/2}. \qquad (11)$$
For $\varepsilon = \varepsilon_n \to 0$, we thus find that the denominator in (C2) can be lower bounded with $e^{K_{f_0}\log\varepsilon_n - c_K K_{f_0}\log K_{f_0} - \|\beta_0^{ext}\|_2^2/2 - (K_{f_0}/2)[\log 2 + \varepsilon_n^2/2]}$. We bound the numerator:
$$\Pi(\mathcal{F}\backslash\mathcal{F}_n) = \Pi\Big(\bigcup_{k=k_n+1}^{\infty}\mathcal{F}_k\Big) \le \sum_{k=k_n+1}^{\infty} e^{-c_K k\log k} \lesssim e^{-c_K(k_n+1)\log(k_n+1)} + \int_{k_n+1}^{\infty} e^{-c_K x\log x}\,\mathrm{d}x,$$
which is of order $e^{-c_K(k_n+1)\log(k_n+1)}$. Combining this bound with (11), we find that (C2) is met if:
$$e^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0 \quad \text{as } n \to \infty.$$
(C3) We bound the numerator by one, and use the bound (11) for the denominator. As $\varepsilon_n \to 0$, we obtain the condition $-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 \le \frac{j^2}{4}\,n\varepsilon_n^2$ for all sufficiently large $j$.
Conclusion. With $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$, letting $k_n \asymp n\varepsilon_n^2 = K_{f_0}\log(n/K_{f_0})$, the condition (C1) is met. With this choice of $k_n$, the condition (C2) holds as well as long as $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Finally, the condition (C3) is met for $K_{f_0} \lesssim \sqrt{n}$.
Remark 3.1. It is worth pointing out that the proof will hold for a larger class of priors on $K$, as long as the prior shrinks at least exponentially fast (meaning that it is bounded from above by $ae^{-bK}$ for constants $a, b > 0$). However, a prior at this exponential limit will require tuning, because the optimal $a$ and $b$ will depend on $K(f_0, \Omega^{EB})$. We recommend using the prior (2.1) that prunes somewhat more aggressively, because it does not require tuning by the user. Indeed, Theorem 3.1 holds regardless of the choice of $c_K > 0$. We conjecture, however, that values $c_K \asymp 1/K(f_0, \Omega^{EB})$ lead to a faster concentration speed and we suggest $c_K = 1$ as a default option.
Remark 3.2. When $K_{f_0}$ is known, there is no need for assigning a prior $\pi_K(\cdot)$ and the conditions (C1) and (C3) are verified similarly as before, fixing the number of steps at $K_{f_0}$.
3.2 Posterior Concentration for Balanced Intervals
An analogue of Theorem 3.1 can be obtained for balanced partitions from Section 2.2.2 that correspond to regression trees with splits at actual observations. Now, we assume that $f_0$ is $\Omega^{BI}$-valid and carry out the proof with $K(f_0, \Omega^{BI})$ instead of $K(f_0, \Omega^{EB})$. The posterior concentration rate is only slightly worse.
Theorem 3.2. (Balanced Intervals) Let $f_0 : [0, 1] \to \mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on balanced intervals equipped with priors $\pi_K(\cdot)$, $\pi_\Omega(\cdot \mid K)$ and $\pi(\beta \mid K)$ as in (3), (6) and (9). Denote with $K_{f_0} \equiv K(f_0, \Omega^{BI})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log^{2\rho} n$ and $K(f_0, \Omega^{BI}) \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \ge M_n\, n^{-1/2}\sqrt{K_{f_0}\log^{2\rho}(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (12)$$
in $P_{f_0}^n$-probability, for every $M_n \to \infty$ as $n \to \infty$, where $\rho > 1/2$.
Proof. All three conditions (C1), (C2) and (C3) hold if we choose $k_n \asymp K_{f_0}[\log(n/K_{f_0})]^{2\rho-1}$. The entropy condition will be satisfied when $\log\big(\sum_{k=1}^{k_n} C^k\,\mathrm{card}(\Omega_k^{BI})\big) \lesssim n\varepsilon_n^2$ for some $C > 0$, where $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log^{2\rho}(n/K_{f_0})}$. Using the upper bound $\mathrm{card}(\Omega_k^{BI}) < \binom{n-1}{k-1} < \binom{n-1}{k_n-1}$ (because $k_n < \frac{n-1}{2}$ for large enough $n$), the condition (C1) is verified. Using the fact that $\log\mathrm{card}(\Omega_{K_{f_0}}^{BI}) \lesssim K_{f_0}\log(n/K_{f_0})$, the condition (C2) will be satisfied when, for some $D > 0$, we have
$$e^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + D\,K_{f_0}\log(n/K_{f_0}) + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0. \qquad (13)$$
This holds for our choice of $k_n$ under the assumptions $\|\beta^0\|_\infty^2 \lesssim \log^{2\rho} n$ and $K_{f_0} \lesssim \sqrt{n}$. These choices also yield (C3).
Remark 3.3. When $K_{f_0} \gtrsim \sqrt{n}$, Theorem 3.1 and Theorem 3.2 still hold, only with the slightly slower concentration rate $n^{-1/2}\sqrt{K_{f_0}\log n}$.
4 Discussion
We provided the first posterior concentration rate results for Bayesian non-parametric regression with step functions. We showed that under suitable complexity priors, the Bayesian procedure adapts to the unknown aspects of the target step function. Our approach can be extended in three ways: (a) to smooth $f_0$ functions, (b) to dimension reduction with high-dimensional predictors, (c) to more general partitioning schemes that correspond to methods like Bayesian CART and BART. These three extensions are developed in our follow-up manuscript [29].
5 Acknowledgment
This work was supported by the James S. Kemper Foundation Faculty Research Fund at the University
of Chicago Booth School of Business.
References
[1] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Statistics/Probability Series. Wadsworth Publishing Company, Belmont, California, U.S.A., 1984.
[2] L. Breiman. Random forests. Mach. Learn., 45:5-32, 2001.
[3] A. Berchuck, E. S. Iversen, J. M. Lancaster, J. Pittman, J. Luo, P. Lee, S. Murphy, H. K. Dressman, P. G. Febbo, M. West, J. R. Nevins, and J. R. Marks. Patterns of gene expression that characterize long-term survival in advanced stage serous ovarian cancers. Clin. Cancer Res., 11(10):3686-3696, 2005.
[4] S. Abu-Nimeh, D. Nappa, X. Wang, and S. Nair. A comparison of machine learning techniques for phishing detection. In Proceedings of the Anti-phishing Working Groups 2nd Annual eCrime Researchers Summit, eCrime '07, pages 60-69, New York, NY, USA, 2007. ACM.
[5] M. A. Razi and K. Athappilly. A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models. Expert Syst. Appl., 29(1):65-74, 2005.
[6] D. P. Green and J. L. Kern. Modeling heterogeneous treatment effects in survey experiments with Bayesian Additive Regression Trees. Public Opin. Q., 76(3):491, 2012.
[7] E. C. Polly and M. J. van der Laan. Super learner in prediction. Available at: http://works.bepress.com/mark_van_der_laan/200/, 2010.
[8] D. M. Roy and Y. W. Teh. The Mondrian process. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1377-1384. Curran Associates, Inc., 2009.
[9] H. A. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. JASA, 93(443):935-948, 1998.
[10] D. Denison, B. Mallick, and A. Smith. A Bayesian CART algorithm. Biometrika, 95(2):363-377, 1998.
[11] H. A. Chipman, E. I. George, and R. E. McCulloch. BART: Bayesian Additive Regression Trees. Ann. Appl. Stat., 4(1):266-298, 2010.
[12] S. Ghosal, J. K. Ghosh, and A. W. van der Vaart. Convergence rates of posterior distributions. Ann. Statist., 28(2):500-531, 2000.
[13] T. Zhang. Learning bounds for a generalized family of Bayesian posterior distributions. In S. Thrun, L. K. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, pages 1149-1156. MIT Press, 2004.
[14] J. Tang, Z. Meng, X. Nguyen, Q. Mei, and M. Zhang. Understanding the limiting factors of topic modeling via posterior contraction analysis. In T. Jebara and E. P. Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 190-198. JMLR Workshop and Conference Proceedings, 2014.
[15] N. Korda, E. Kaufmann, and R. Munos. Thompson sampling for 1-dimensional exponential family bandits. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1448-1456. Curran Associates, Inc., 2013.
[16] F.-X. Briol, C. Oates, M. Girolami, and M. A. Osborne. Frank-Wolfe Bayesian quadrature: Probabilistic integration with theoretical guarantees. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1162-1170. Curran Associates, Inc., 2015.
[17] M. Chen, C. Gao, and H. Zhao. Posterior contraction rates of the phylogenetic Indian buffet processes. Bayesian Anal., 11(2):477-497, 2016.
[18] S. Ghosal and A. van der Vaart. Convergence rates of posterior distributions for noniid observations. Ann. Statist., 35(1):192-223, 2007.
[19] B. Szabó, A. W. van der Vaart, and J. H. van Zanten. Frequentist coverage of adaptive nonparametric Bayesian credible sets. Ann. Statist., 43(4):1391-1428, 2015.
[20] I. Castillo and R. Nickl. On the Bernstein-von Mises phenomenon for nonparametric Bayes procedures. Ann. Statist., 42(5):1941-1969, 2014.
[21] J. Rousseau and B. Szabo. Asymptotic frequentist coverage properties of Bayesian credible sets for sieve priors in general settings. ArXiv e-prints, September 2016.
[22] I. Castillo. Polya tree posterior distributions on densities. Preprint available at http://www.lpma-paris.fr/pageperso/castillo/polya.pdf, 2016.
[23] L. Liu and W. H. Wong. Multivariate density estimation via adaptive partitioning (II): posterior concentration. arXiv:1508.04812v1, 2015.
[24] C. Scricciolo. On rates of convergence for Bayesian density estimation. Scand. J. Stat., 34(3):626-642, 2007.
[25] M. Coram and S. Lalley. Consistency of Bayes estimators of a binary regression function. Ann. Statist., 34(3):1233-1269, 2006.
[26] L. Shepp. Covering the circle with random arcs. Israel J. Math., 34(11):328-345, 1972.
[27] W. Feller. An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, 3rd edition, January 1968.
[28] D. L. Donoho. CART and best-ortho-basis: a connection. Ann. Statist., 25(5):1870-1911, 1997.
[29] V. Rockova and S. L. van der Pas. Posterior concentration for Bayesian regression trees and their ensembles. arXiv:1708.08734, 2017.
[30] T. Anderson. Some nonparametric multivariate procedures based on statistically equivalent blocks. In P. R. Krishnaiah, editor, Multivariate Analysis, pages 5-27. Academic Press, New York, 1966.
[31] L. Flatto and A. Konheim. The random division of an interval and the random covering of a circle. SIAM Rev., 4:211-222, 1962.
[32] A. Nobel. Histogram regression estimation using data-dependent partitions. Ann. Statist., 24(3):1084-1105, 1996.
[33] C. Gao, F. Han, and C. H. Zhang. Minimax risk bounds for piecewise constant models. Manuscript, pages 1-36, 2017.
6,418 | 6,805 | A graph-theoretic approach to multitasking
Noga Alon
Tel-Aviv University
Sebastian Musslick
Princeton University
Daniel Reichman
UC Berkeley
Jonathan D. Cohen
Princeton University
Igor Shinkar
UC Berkeley
Thomas L. Griffiths
UC Berkeley
Tal Wagner
MIT
Biswadip Dey
Princeton University
Kayhan Ozcimder
Princeton University
Abstract
A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of
representations. However, how the richness of such interactions trades off against
the ability of a network to simultaneously carry out multiple independent processes
? a salient limitation in many domains of human cognition ? remains largely unexplored. In this paper we use a graph-theoretic analysis of network architecture
to address this question, where tasks are represented as edges in a bipartite graph
$G = (A \cup B, E)$. We define a new measure of multitasking capacity of such
networks, based on the assumptions that tasks that need to be multitasked rely on
independent resources, i.e., form a matching, and that tasks can be multitasked
without interference if they form an induced matching. Our main result is an
inherent tradeoff between the multitasking capacity and the average degree of the
network that holds regardless of the network architecture. These results are also
extended to networks of depth greater than 2. On the positive side, we demonstrate
that networks that are random-like (e.g., locally sparse) can have desirable multitasking properties. Our results shed light into the parallel-processing limitations of
neural systems and provide insights that may be useful for the analysis and design
of parallel architectures.
1 Introduction
One of the primary features of neural network architectures is their ability to support parallel
distributed processing [RMG+ 86]. The decentralized nature of biological and artificial nets results in
greater robustness and fault tolerance when compared to serial architectures such as Turing machines.
On the other hand, the lack of a central coordination mechanism in neural networks can result
in interference between units (neurons) and such interference effects have been demonstrated in
several settings such as the analysis of associative memories [AGS85] and multitask learning [MC89].
* Equal contribution.
* Equal contribution. Supported by DARPA contract N66001-15-2-4048, Value Alignment in Autonomous Systems and Grant: 2014-1600, Sponsor: William and Flora Hewlett Foundation, Project Title: Cybersecurity and Internet Policy.
* This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
Understanding the source of such interference and how it can be prevented has been a major focus of
recent research (see, e.g., [KPR+ 17] and the references therein).
While the stark limitation of our ability to carry out multiple tasks simultaneously, i.e., multitask, is
one of the most widely documented phenomena in cognitive psychology [SS77], the sources for this
limitation are still unclear. Recently, a graph-theoretic model [FSGC14, MDO+ 16] has suggested
that interference effects may explain the limitations of the human cognitive system in performing
multiple task processes at the same time. This model consists of a simple 2-layer feed-forward
network represented by a bipartite graph $G = (A \cup B, E)$ wherein the vertex set is partitioned into two disjoint sets of nodes $A$ and $B$, representing the inputs and the outputs of tasks respectively. An edge $(a, b) \in E$ corresponds to a directed pathway from the input layer to the output layer in the network that is taken to represent a cognitive process (or task[4]) that maps an input to an output. In more abstract terms, every vertex $a \in A$ is associated with a set of inputs $I_a$, every vertex $b \in B$ is associated with a set of outputs $O_b$, and the edge $(a, b)$ is associated with a function $f_{a,b} : I_a \to O_b$.[5] In this work, we also consider deeper architectures with $r > 2$ layers, where edges correspond
to mappings between nodes from consecutive layers and a path P from the input (first) layer to the
output (last) layer is simply the composition of the mappings on the edges in P . The model above is
quite general and simple modifications of it may apply to other settings. For example, we can assume
the vertices in A are senders and vertices in B are receivers and that a task associated with an edge
e = (a, b) is transmitting information from a to b along a communication channel e.
Given a 2-layer network, a task set is a set of edges $T \subseteq E$. A key assumption made in [MDO+ 16,
FSGC14] that we adopt as well is that all task sets that need to be multitasked in parallel form a
matching, namely, no two edges in T share a vertex as an endpoint. This assumption reflects a
limitation on the parallelism of the network that is similar to the Exclusive Read Exclusive Write
(EREW) model in parallel RAM, where tasks cannot simultaneously read from the same input or
write to the same output. Similarly, for depth r > 2 networks, task sets correspond to node disjoint
paths from the input layer to the output layer. For simplicity, we shall mostly focus from now on the
depth 2 case with |A| = |B| = n.
In [MDO+ 16, FSGC14] it is suggested that concurrently executing two tasks associated with two
(disjoint) edges e and f will result in interference if e and f are connected by a third edge h.
The rationale for this interference assumption stems from the distributed operation of the network
that may result in the task associated with h becoming activated automatically once its input and
output are operating, resulting in interference with the tasks associated with e and f . Therefore,
[MDO+ 16, FSGC14] postulate that all tasks within a task set T can be performed in parallel
without interferences only if the edges in T form an induced matching. Namely, no two edges
in T are connected by a third edge. Interestingly, the induced matching condition also arises in
the communication setting [BLM93, AMS12, CK85], where it is assumed that messages between
senders and receivers can be reliably transmitted if the edges connecting them forms an induced
matching. Following the aforementioned interference model, [MDO+ 16] define the multitasking
capability of a bipartite network G as the maximum cardinality of an induced matching in G.
It has been demonstrated that neural network architectures are subject to a fundamental tradeoff
between learning efficiency that is promoted by an economic use of shared representations between
tasks, on the one hand, and the ability to execute multiple tasks independently, on the other hand [MSÖ+ 17]. Namely, it is suggested that as the average degree $d$ ("efficiency of representations" -- a larger degree corresponds to more economical use of shared representations between tasks) of $G$ increases, the "multitasking ability" should decay in $d$ [FSGC14]. That is, the cardinality of the maximal induced matching should be upper bounded by $f(d)n$ with $\lim_{d\to\infty} f(d) = 0$. This
prediction was tested and supported on certain architectures by numerical simulations in [MDO+ 16,
FSGC14], where it was suggested that environmental constraints push towards efficient use of
representations which inevitably limits multitasking. Establishing such a tradeoff is of interest, as
[4] We view a task as constituting a simple mechanistic instantiation of a cognitive process, consistent with Neisser's original definition [Nei67]. According to this definition a task process (e.g. color naming) is a mapping from an input space (e.g. colors) to an output space (verbal). Within this framework the decision of what constitutes an input space for a task is left to the designer and may be problem-specific. The modeling of more complex tasks might require to extend this framework to multidimensional input spaces. This would allow to capture scenarios in which tasks are partially overlapping in terms of their input and output spaces.
[5] The function $f_{a,b}$ is hypothesized to be implemented by a gate used in neural networks such as a sigmoid or threshold gate.
Figure 1: In the depicted bipartite graph, the node shading represents the bipartition. The blue edges
form an induced matching, which represents a large set of tasks that can be multitasked. However,
the red edges form a matching in which the largest induced matching has size only 1. This represents
a set of tasks that greatly interfere with each other.
Figure 2: Hypercube on 8 nodes. Node shading represents the bipartition. On the left, the blue edges
form an induced matching of size 2. On the right, the red edges form a matching of size 4 whose
largest induced matching has size 1. Hence the multitasking capacity of the hypercube is at most 1/4.
it can identify limitations of artificial nets that rely on shared representations and aid in designing
systems that attain an optimal tradeoff. More generally, establishing a connection between graph-theoretic parameters and connectionist models of cognition consists of a new conceptual development
that may apply to domains beyond multitasking.
Identifying the multitasking capacity of $G = (A \cup B, E)$ with the size of its maximal induced
matching has two drawbacks. First, the existence of some, possibly large, set of tasks that can be
multitasked does not preclude the existence of a (possibly small) set of critical tasks that greatly
interfere with each other (e.g., consider the case in which a complete bipartite graph Kd,d occurs
as a subgraph of G. This is illustrated in Figure 1). Second, it is easy to give examples of graphs
(where |A| = |B| = n) with average degree ?(n) that contain an induced matching of size n/2
(for example, two copies of complete bipartite graph connected by a matching: see Figure 1 for
an illustration). Hence, it is impossible to upper bound the multitasking capacity of every network
with average degree d by f (d)n with f vanishing as the average degree d tends infinity. Therefore,
the generality of the suggested tradeoff between efficiency and concurrency is not clear under this
definition.
Our main contribution is a novel measure of the multitasking capacity that is aimed at solving the
first problem, namely networks with "high" capacity which contain a "small" task set whose edges badly interfere with one another. In particular, for a parameter $k$ we consider every matching of size $k$, and ask whether every matching $M$ of size $k$ contains a large induced matching $M' \subseteq M$. This
motivates the following definition (see Figure 2 for an illustration).
Definition 1.1. Let $G = (A \cup B, E)$ be a bipartite graph with $|A| = |B| = n$, and let $k \in \mathbb{N}$ be a parameter. We say that $G$ is a $(k, \alpha(k))$-multitasker if for every matching $M$ in $G$ of size $|M| \le k$, there exists an induced matching $M' \subseteq M$ such that
$$|M'| \ge \alpha(k)|M|.$$
We will say that a graph $G$ is an $\alpha$-multitasker if it is an $(n, \alpha)$-multitasker.
The parameter $\alpha \in (0, 1]$ measures the multitasking capabilities of $G$, and the larger $\alpha$ is, the better multitasker $G$ is considered. We call the parameter $\alpha(k) \in (0, 1]$ the multitasking capacity of $G$ for matchings of size $k$.
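Definition 1.1 is easy to test by brute force on small graphs. The following sketch is our code (not from the paper; the specific matching is our choice): it checks inducedness and reproduces the Figure 2 phenomenon on the 3-dimensional hypercube, a perfect matching of size 4 whose largest induced sub-matching has size 1, giving multitasking capacity at most 1/4 at $k = 4$.

```python
from itertools import combinations

def is_induced(M, E):
    """True iff no two edges of the matching M are joined by a third edge of E."""
    for (a, b), (c, d) in combinations(M, 2):
        if (a, d) in E or (c, b) in E:   # E stores both orientations of each edge
            return False
    return True

def largest_induced_submatching(M, E):
    """Size of the largest induced matching contained in M (brute force)."""
    for size in range(len(M), 0, -1):
        if any(is_induced(S, E) for S in combinations(M, size)):
            return size
    return 0

# 3-cube: vertices 0..7, bipartitioned by bit parity; edges flip one coordinate
E = {(u, u ^ (1 << i)) for u in range(8) for i in range(3)}
M = [(0, 4), (3, 2), (5, 1), (6, 7)]      # a perfect matching of the 3-cube
print(largest_induced_submatching(M, E))  # 1: every pair of these tasks interferes
```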
Definition 1.1 generalizes to networks of depth r > 2, where instead of matchings we consider first-layer to last-layer node-disjoint paths, and instead of induced matchings we consider induced paths,
i.e., a set of disjoint paths such that no two nodes belonging to different paths are adjacent.
The main question we shall consider here is what kind of tradeoffs one should expect between α, d
and k. In particular, which network architectures give rise to good multitasking behavior? Should we
expect "multitasking vs. multiplexing", namely α tending to zero with d for all graphs of average
degree d? While our definition of multitasking capacity is aimed at resolving the problem of small
task sets that can be poorly multitasked, it turns out to be related also to the "multitasking vs.
multiplexing" phenomenon. Furthermore, our graph-theoretic formalism also gives insights as to how
network depth and interference are related.
1.1 Our results
We divide the presentation of the results into two parts. The first part discusses the case of d-regular
graphs, and the second part discusses general graphs.
The d-regular case: Let G = (A ∪ B, E) be a bipartite d-regular graph with n vertices on each
side. Considering the case of k = n, i.e., maximal possible induced matchings that are contained
in a perfect matching (that is, a matching of cardinality n), we show that if a d-regular graph is an
(n, α(n))-multitasker, then α(n) = O(1/√d). Our upper bound on α(n) establishes an inherent
limitation on the multitasking capacity of any network. That is, for any infinite family of networks
with average degree tending to infinity it holds that α(n) must tend to 0 as the degree grows. In fact,
we prove that the degree d of the graph constrains the multitasking capacity also for task sets of smaller
sizes. Specifically, for all k that are sufficiently larger than Θ(n/d) it holds that α(k) tends to 0 as d
increases. In this version of the paper we prove this result for k > n/d^{1/4}. The full version of this
paper [ACD+] contains the statement and the result that holds for all k ≥ Ω(n/d).
Theorem 1.2. Let G = (A ∪ B, E) be a d-regular (k, α(k))-multitasker graph with |A| = |B| = n.
If n/d^{1/4} ≤ k ≤ n, then α(k) ≤ O(n/(k√d)). In particular, there exists a perfect matching in G that
does not contain an induced matching of size larger than O(n/√d).
For task sets of size n, Theorem 1.2 is tight up to logarithmic factors, as we provide a construction of
an infinite family of d-regular graphs where every matching of size n contains an induced matching
of size Ω(n/(√d log d)). The precise statement appears in the full version of the paper [ACD+].
For arbitrary values of k ≤ n it is not hard to see that every d-regular graph achieves α(k) ≥ 1/(2d). We
show that this naive bound can be asymptotically improved upon by constructing α-multitaskers
with α = Ω(log d / d). The construction is based on bipartite graphs which have good spectral expansion
properties. For more details see the full version of the paper [ACD+].
We also consider networks of depth r > 2.⁶ We generalize our ideas for depth-2 networks by upper-bounding the multitasking capacity of arbitrary d-regular networks of depth r by O((r/(d ln r))^{1−1/r}).
Observing that there are d-regular bipartite graphs with α(n) = Ω(1/(√d log d)), this implies
that for task sets of size n, networks of depth 2 < r ≪ d incur interference which is strictly worse
than depth-2 networks. We believe that interference worsens as r increases to r + 1 (for r > 2),
although whether this is indeed the case is an open question.
The irregular case: Next we turn to arbitrary, not necessarily regular, graphs. We show that
for an arbitrary bipartite graph with n vertices on each side and average degree d, its multitasking
capacity α(n) is upper bounded by O((log n / d)^{1/3}). That is, where the average degree is concerned,
the multitasking capacity of a graph tends to zero, provided that the average degree of the graph is
ω(log n).
Theorem 1.3. Let G = (A ∪ B, E) be a bipartite graph of average degree d with |A| = |B| = n.
If G is an α-multitasker then α ≤ O((log n / d)^{1/3}).
For dense graphs satisfying d = Θ(n) (which is studied in [FSGC14]), we prove a stronger upper
bound of α(n) = O(1/√n) using the Szemerédi regularity lemma. See Theorem 3.9 for details.
We also show that there are multitaskers of average degree O(log log n) with α ≥ 1/3 − ε. Hence,
in contrast to the regular case, for the multitasking capacity to decay with the average degree d, we must
assume that d grows faster than log log n. The details behind this construction, which builds on ideas
in [Pyb85, PRS95], appear in the full version of this paper [ACD+].
⁶We think of r as a constant independent of n, and of d as tending to infinity with n.
Finally, for any d ∈ ℕ and for all α ∈ (0, 1/5) we show a construction of a graph with average degree
d that is a (k, α)-multitasker for all k ≤ Θ(n/d^{1+4α}). Comparing this to the foregoing results, here
we do not require that d = O(log log n). That is, allowing larger values of d allows us to construct
networks with constant multitasking capacities, albeit only with respect to matchings whose size is at
most n/d^{1+4α}. See Theorem 3.10 for details.
2 Preliminaries
A matching M in a graph G is a set of edges {e₁, . . . , e_m} such that no two edges in M share a
common vertex. If G has 2n vertices and |M| = n, we say that M is a perfect matching. By Hall's
theorem, every d-regular graph with bipartition (A, B) has a perfect matching. A matching M is
induced if there are no two distinct edges e₁, e₂ in M such that there is an edge connecting e₁ to
e₂. Given a graph G = (V, E) and two disjoint sets A, B ⊆ V, we let e(A, B) be the set of edges
with one endpoint in A and the other in B. For a subset A, e(A) is the set of all edges contained in A.
Given an edge e ∈ E, we define the graph G/e obtained by contracting e = (u, v) as the graph with
vertex set (V ∪ {v_e}) \ {u, v}. The vertex v_e is connected to all vertices in G neighboring u or v. Any
two other vertices x, y ∈ V \ {u, v} form an edge in G/e if and only if they were connected in
G. Contracting a set of edges, and in particular contracting a matching, means contracting the edges
one by one in an arbitrary order.
Given a subset of vertices U ⊆ V, the subgraph induced by U, denoted by G[U], is the graph whose
vertex set is U, in which two vertices of U are connected if and only if they are connected in G. For a set
of edges E′ ⊆ E, denote by G[E′] the graph induced by all vertices incident to an edge in E′. We
will use the following simple observation throughout the paper.
Lemma 2.1. Let M be a matching in G, and let d_avg be the average degree of G[M]. If we contract
all edges in M in G[M], then the resulting graph G̃[M] has average degree at most 2d_avg − 2.
Proof. G[M] contains 2|M| vertices and d_avg|M| edges. The result follows as G̃[M] has |M|
vertices and at most d_avg|M| − |M| edges.
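As a quick sanity check of Lemma 2.1, the sketch below (our code; the edge-set representation and helper names are assumptions for illustration) contracts a matching in a small graph and compares the resulting average degree against the 2d_avg − 2 bound.

def avg_degree(edges, verts):
    # average degree of a simple undirected graph: 2|E| / |V|
    return 2 * len(edges) / len(verts)

def contract_matching(edges, matching):
    # merge the two endpoints of every matched edge into a fresh vertex
    merge = {}
    for i, (u, v) in enumerate(matching):
        merge[u] = merge[v] = ('merged', i)
    relabel = lambda x: merge.get(x, x)
    contracted = {tuple(sorted((relabel(u), relabel(v)), key=str))
                  for u, v in edges}
    contracted = {e for e in contracted if e[0] != e[1]}  # drop self-loops
    verts = {x for e in edges for x in e} - set(merge) | set(merge.values())
    return contracted, verts

# G[M] = 4-cycle a-b-c-d-a with matching M = {ab, cd}: d_avg = 2, so
# Lemma 2.1 bounds the contracted average degree by 2*2 - 2 = 2
edges = {('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')}
new_edges, new_verts = contract_matching(edges, [('a', 'b'), ('c', 'd')])
print(avg_degree(new_edges, new_verts))  # 1.0, within the bound of 2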
An independent set in a graph G = (V, E) is a set of vertices that do not span an edge. We will use
the following well-known fact attributed to Turán.
Lemma 2.2. Every n-vertex graph with average degree d_avg contains an independent set of size at
least n/(d_avg + 1).
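The bound of Lemma 2.2 is achieved by a standard greedy procedure; a minimal sketch (our code, added for illustration) follows.

def greedy_independent_set(adj):
    # repeatedly take a minimum-degree vertex and delete its closed
    # neighbourhood; a standard argument shows the result has size
    # at least n / (d_avg + 1), i.e. the bound of Lemma 2.2
    adj = {v: set(ns) for v, ns in adj.items()}
    indep = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        indep.append(v)
        dead = adj[v] | {v}
        adj = {u: ns - dead for u, ns in adj.items() if u not in dead}
    return indep

# triangle plus an isolated vertex: n = 4, d_avg = 1.5, so the bound
# guarantees an independent set of size at least 4/2.5 = 1.6, i.e. 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
print(greedy_independent_set(adj))  # e.g. [3, 0]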
Let G = (V, E) be a bipartite graph, k an integer, and α ∈ (0, 1] a parameter. We define the
(α, k)-matching graph H(G, α, k) = (L, R, F) to be a bipartite graph, where L is the set of all
matchings of size k in G, R is the set of all induced matchings of size αk in G, and a vertex v_M ∈ L
(corresponding to a matching M of size k) is connected to a vertex u_{M′} (corresponding to an induced
matching M′ of size αk) if and only if M′ ⊆ M. We omit α, k, G from the notation of H when they
are clear from the context. We will repeatedly use the following lemma in upper bounding the
multitasking capacity in graph families.
Lemma 2.3. Suppose that the average degree of the vertices in L in the graph H(G, α, k) is strictly
smaller than 1. Then α(k) < α.
Proof. By the assumption, L has a vertex of degree 0. Hence there exists a matching of size k in G
that does not contain an induced matching of size αk.
3 Upper bounds on the multitasking capacity
3.1 The regular case
In this section we prove Theorem 1.2, which upper bounds the multitasking capacity of arbitrary d-regular multitaskers. We start the proof of Theorem 1.2 with the case k = n. The following theorem
shows that d-regular (k = n, α)-multitaskers must have α = O(1/√d).
Theorem 3.1. Let G = (A ∪ B, E) be a bipartite d-regular graph where |A| = |B| = n. Then G
contains a perfect matching M such that every induced matching M′ ⊆ M has size at most 9n/√d.
For the proof, we need bounds on the number of perfect matchings in d-regular bipartite graphs.
Lemma 3.2. Let G = (A, B, E) be a bipartite d-regular graph where |A| = |B| = n. Denote by
M(G) the number of perfect matchings in G. Then
(d/e)^n ≤ ((d − 1)^{d−1} / d^{d−2})^n ≤ M(G) ≤ (d!)^{n/d}.
The lower bound on M(G) is due to Schrijver [Sch98]. The upper bound on M(G) is known as
Minc's conjecture, which was proven by Bregman [Bre73].
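The bounds of Lemma 3.2 are easy to check numerically on small instances. The sketch below (our illustration, assuming the graph is given by its n×n 0/1 biadjacency matrix) counts perfect matchings via the permanent.

from itertools import permutations
from math import factorial

def num_perfect_matchings(A):
    # the permanent of the 0/1 biadjacency matrix counts perfect matchings
    n = len(A)
    return sum(all(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def check_lemma_3_2(A, d):
    n = len(A)
    m = num_perfect_matchings(A)
    lower = ((d - 1) ** (d - 1) / d ** (d - 2)) ** n   # Schrijver
    upper = factorial(d) ** (n / d)                    # Bregman-Minc
    assert lower <= m <= upper
    return lower, m, upper

# an 8-cycle seen as a 2-regular bipartite graph on 4 + 4 vertices
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1]]
print(check_lemma_3_2(A, 2))  # (1.0, 2, 4.0)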
Proof of Theorem 3.1. Consider H(G, α, n), where α will be determined later. Clearly |R| ≤
(n choose αn)² ≤ (e/α)^{2αn}. By the upper bound in Lemma 3.2, every induced matching of size αn can be
contained in at most (d!)^{(1−α)n/d} perfect matchings. By the lower bound in Lemma 3.2, |L| ≥ (d/e)^n.
Therefore, the average degree of the vertices in L is at most
(e/α)^{2αn} · (d!)^{(1−α)n/d} / (d/e)^n ≤ (e/α)^{2αn} · (√(2πd)·(d/e)^d)^{(1−α)n/d} / (d/e)^n = ((2πd)^{(1−α)/(2αd)} · e³/(α²d))^{αn}.
Setting α > 2√(e³/d) yields e³/(α²d) < 1/2, and it can be verified that (2πd)^{(1−α)/(2αd)} < 2 for all such α.
Therefore in this setting, the average degree of the vertices in L is smaller than 1, which concludes
the proof by Lemma 2.3. This completes the proof of the theorem.
We record the following simple observation, which is immediate from the definition (a matching of
size γk contains a sub-matching of size k, whose guaranteed induced matching of size αk constitutes
an (α/γ)-fraction of γk).
Proposition 3.3. If G is a (k, α)-multitasker, then for all 1 < γ ≤ n/k, the graph G is a (γk, α/γ)-multitasker.
Theorem 1.2 follows by combining Theorem 3.1 with (the contrapositive of) Proposition 3.3.
3.2 Upper bounds for networks of depth larger than 2
A graph G = (V, E) is a network with r layers of width n and degree d if V is partitioned into r
independent sets V₁, . . . , V_r of size n each, such that each (V_i, V_{i+1}) induces a d-regular bipartite
graph for all i < r, and there are no additional edges in G.
A top-bottom path in G is a path v₁, . . . , v_r such that v_i ∈ V_i for all i ≤ r, and v_i, v_{i+1} are neighbors
for all i < r.
A set of node-disjoint top-bottom paths p₁, . . . , p_k is called induced if for every two edges e ∈ p_i
and e′ ∈ p_j such that i ≠ j, there is no edge in G connecting e and e′.
Fact 3.4. A set of node-disjoint top-bottom paths p₁, . . . , p_k is induced if and only if for every i < r
it holds that (p₁ ∪ . . . ∪ p_k) ∩ E(V_i, V_{i+1}) is an induced matching in G.
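Fact 3.4 suggests a simple layer-by-layer test; a minimal sketch (our code, with an assumed edge-list representation of the layered network) follows.

def layer_matching_induced(matching, layer_edges):
    # Fact 3.4: the paths' edges between consecutive layers must form
    # an induced matching among the edges of that layer
    touched = {x for e in matching for x in e}
    return all(e in matching or not (e[0] in touched and e[1] in touched)
               for e in layer_edges)

def is_induced_path_set(paths, layers):
    # paths: vertex sequences v_1..v_r; layers[i]: edge set between V_i, V_{i+1}
    return all(layer_matching_induced({(p[i], p[i + 1]) for p in paths}, layer)
               for i, layer in enumerate(layers))

# depth-3 example: the two paths are node-disjoint but NOT induced,
# because the extra first-layer edge (a1, b2) connects them
layers = [{('a1', 'b1'), ('a2', 'b2'), ('a1', 'b2')},
          {('b1', 'c1'), ('b2', 'c2')}]
paths = [('a1', 'b1', 'c1'), ('a2', 'b2', 'c2')]
print(is_induced_path_set(paths, layers))  # False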
We say that a network G as above is a (k, α)-multitasker if every set of k node-disjoint top-bottom
paths contains an induced subset of size at least αk.
Theorem 3.5. If G is an (n, α)-multitasker then α < e·(er/(d ln r))^{1−1/r}.
Proof. Let H = (L, R; E_H) be the bipartite graph in which side L has a node for each set of n
node-disjoint top-bottom paths in G, side R has a node for each induced set of αn node-disjoint
top-bottom paths in G, and P ∈ L, P′ ∈ R are adjacent iff P′ ⊆ P. Let D be the maximum
degree of side R. We wish to upper-bound the average degree of side L, which is upper-bounded by
D|R|/|L|.
|R| is clearly upper bounded by (n choose αn)^r. It is a simple observation that |L| equals ∏_{i<r} m_i, where
m_i denotes the number of perfect matchings in the bipartite graph G[V_i ∪ V_{i+1}]. Since this graph is
d-regular, by the Falikman–Egorychev proof of the Van der Waerden conjecture ([Fal81], [Ego81]), or
by Schrijver's lower bound, we have m_i ≥ (d/e)^n and hence |L| ≥ (d/e)^{n(r−1)}. To upper bound
D, fix P′ ∈ R, and let G′ be the network resulting from removing all nodes and edges in P′ from G.
This removes exactly αn nodes from each layer V_i; denote by V_i′ the remaining nodes in this layer in
G′. It is a straightforward observation that D equals the number of sets of (1 − α)n node-disjoint
top-bottom paths in G′. Each such set decomposes into M₁, . . . , M_{r−1} such that M_i is a perfect
matching on G′[V_i′, V_{i+1}′] for each i < r. Therefore D ≤ ∏_{i<r} m_i′, where m_i′ denotes the number of
perfect matchings in G′[V_i′, V_{i+1}′]. The latter is a bipartite graph with (1−α)n nodes on each side and
maximum degree d, and hence by the Bregman–Minc inequality, m_i′ ≤ (d!)^{(1−α)n/d}. Consequently,
D ≤ (d!)^{(1−α)n(r−1)/d}.
Putting everything together, we find that the average degree of side L is upper bounded by
D|R|/|L| ≤ (d!)^{(1−α)n(r−1)/d} · (n choose αn)^r / (d/e)^{n(r−1)} ≤ (√(2πd)·(d/e)^d)^{(1−α)n(r−1)/d} · (e/α)^{αnr} / (d/e)^{n(r−1)} = ((2πd)^{(1−α)/(2αd)} · (e/d) · (e/α)^{r/(r−1)})^{αn(r−1)}.   (1)
For C = r/ln(r) we will show that if α ≥ e(eC/d)^{1−1/r} then the above bound is less than 1, which
implies side L has a node of degree 0, a contradiction. To this end, note that for this α we have
(e/d) · (e/α)^{r/(r−1)} ≤ 1/C,   (2)
and
(2πd)^{(1−α)/(2αd)} ≤ (2πd)^{1/(2αd)} ≤ (2πd)^{1/(2eC^{1−1/r}d^{1/r})}.
Fact 3.6. For all constants β, γ > 0, the function f(d) = (βd)^{1/(γd^{1/r})} is maximized at d = e^r/β,
and f(e^r/β) = e^{rβ^{1/r}/(γe)}.
Plugging this above (and using r ≥ 2), we obtain
(2πd)^{(1−α)/(2αd)} ≤ (2πd)^{1/(2eC^{1−1/r}d^{1/r})} ≤ e^{r(2πeC)^{1/r}/(2Ce²)},
and plugging this with Equation (2) into Equation (1) yields
D|R|/|L| ≤ (e^{(2πer)^{1/r}·ln(r)/(2e^{3/2})}/r)^{αn(r−1)} < 1, as required.
3.3 The irregular case
Below we consider general (not necessarily regular) graphs with average degree d, and prove
Theorem 1.3. In order to prove it, we first show a limitation on the multitasking capacity of graphs
where the average degree is d and the maximum degree is bounded by a parameter Δ.
Theorem 3.7. Let G be a bipartite graph with n nodes on each side, average degree d, and maximum
degree Δ. If G is an α-multitasker, then α ≤ O(Δ^{1/3}/d^{2/3}).
A proof of Theorem 3.7 can be found in the full version of this paper [ACD+].
Note that Theorem 3.7 does not provide any nontrivial bounds on α when Δ exceeds d². However, we
use it to prove Theorem 1.3, which establishes nearly the same upper bound with no assumption on Δ.
To do so we need the following lemma, which is also proved in the full version of this paper [ACD+].
Lemma 3.8. Every bipartite graph with 2n vertices and average degree d > 4 log n contains a
subgraph in which the average degree is at least b = d/(4 log n) and the maximum degree is at most 2b.
We can now prove Theorem 1.3.
Proof of Theorem 1.3. By Lemma 3.8, G contains a subgraph with average degree b ≥ d/(4 log n)
and maximum degree at most 2b. The result thus follows from Theorem 3.7.
As in the regular case, for smaller values of k we can obtain a bound of α = O(√(n/(dk))) for (k, α)-multitaskers. See the full version of this paper [ACD+] for the precise details.
When the graph is dense, we prove the following better upper bound on α.
Theorem 3.9. Let G be a bipartite graph with n vertices on each side and average degree d = Θ(n).
If G is an α-multitasker, then α ≤ O((1/n)^{1/2}).
Proof. By the result in [PRS95] (see Theorem 3 there), the graph G contains a d′-regular bipartite
subgraph with d′ = Θ(n). The result thus follows from our upper bound for regular graphs as stated in
Theorem 1.2.
3.4 A simple construction of a good multitasker
We show that for small constants α we may achieve a significant increase in k: we show the existence
of (Θ(n/d^{1+4α}), α)-multitaskers for any 0 < α < 1/5.
Theorem 3.10. Fix d ∈ ℕ, and let n ∈ ℕ be sufficiently large. For a fixed 0 < α < 1/5, there exists
a (k, α)-multitasker with n vertices on each side and average degree d, for all k ≤ Θ(n/d^{1+4α}).
Proof. It is known (see, e.g., [FW16]) that for sufficiently large n there exists an n-vertex graph
G = (V, E) with average degree d such that every subgraph of G of size s ≤ O(n/d^{1+4α}) has
average degree at most ½(1/α − 1). Define a bipartite graph H = (A ∪ B, E_H) such that A and B are
two copies of V, and for a ∈ A and b ∈ B we have (a, b) ∈ E_H if and only if (a, b) ∈ E. We get
that the average degree of H is d, and for any two sets A′ ⊆ A and B′ ⊆ B such that |A′| = |B′| ≤ s/2,
the average degree of H[A′ ∪ B′] is at most 1/α − 1. Consider a matching M of size s/2 in H. By
Lemma 2.1, if we contract all edges of the matching, we get a graph of average degree at most 2/α − 1.
By Lemma 2.2, such a graph contains an independent set of size at least ½α|M|, which corresponds
to a large induced matching contained in M. This concludes the proof of the theorem.
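The graph H in this proof is simply the bipartite double cover of G; a minimal sketch of that construction step (our code, added for illustration) follows.

def bipartite_double_cover(edges):
    # H from the proof of Theorem 3.10: A and B are two copies of V, and
    # (a, 'A') -- (b, 'B') is an edge of H iff {a, b} is an edge of G;
    # H inherits G's average degree and its local sparsity
    H = set()
    for a, b in edges:
        H.add(((a, 'A'), (b, 'B')))
        H.add(((b, 'A'), (a, 'B')))
    return H

# a 4-cycle lifts to an 8-vertex 2-regular bipartite graph
print(sorted(bipartite_double_cover([(0, 1), (1, 2), (2, 3), (3, 0)])))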
4 Conclusions
We have considered a new multitasking measure for parallel architectures that is aimed at providing
quantitative measures of parallel processing capabilities of neural systems. We established an inherent
tradeoff between the density of the network and its multitasking capacity that holds for every graph
that is sufficiently dense. This tradeoff is rather general and it applies to regular graphs, to irregular
graphs and to layered networks of depth greater than 2. We have also obtained quantitative insights.
For example, we have provided evidence that interference increases as depth increases from 2 to
r > 2, and demonstrated that irregular graphs allow for better multitasking than regular graphs for
certain edge densities. Our findings are also related to recent efforts in cognitive neuroscience to
pinpoint the reason for the limitations people experience in multitasking control-demanding tasks.
We have found that networks with pseudorandom properties (locally sparse, spectral expanders) have
good multitasking capabilities. Interestingly, previous works have documented the benefits of random
and pseudorandom architectures in deep learning, Hopfield networks and other settings [ABGM14,
Val00, KP88]. Whether there is an underlying cause for these results remains an interesting direction
for future research.
Our work is limited in several aspects. First, our model is graph-theoretic in nature, focusing
exclusively on the adjacency structure of tasks and does not consider many parameters that emerge in
biological and artificial parallel architectures. Second, we do not address tasks of different weights
(assuming all tasks have the same weights), stochastic and probabilistic interference (we assume
interference occurs with probability 1) and the exact implementation of the functions that compute
the tasks represented by edges. A promising avenue for future work will be to evaluate the predictive
validity of α, that is, the ability to predict the parallel processing performance of trained neural networks
from corresponding measures of α.
To summarize, the current work is directed towards laying the foundations for a deeper understanding
of the factors that affect the tension between efficiency of representation, and flexibility of processing
in neural network architectures. We hope that this will help inspire a parallel proliferation of efforts
to further explore this area.
References
[ABGM14] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, pages 584–592, 2014.
[ACD+] Noga Alon, Jonathan D. Cohen, Biswadip Dey, Tom Griffiths, Sebastian Musslick, Kayhan Özcimder, Daniel Reichman, Igor Shinkar, and Tal Wagner. A graph-theoretic approach to multitasking (full version). Available at arXiv:1611.02400, 2017.
[AGS85] Daniel J. Amit, Hanoch Gutfreund, and Haim Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. Physical Review Letters, 55(14):1530, 1985.
[AMS12] Noga Alon, Ankur Moitra, and Benny Sudakov. Nearly complete graphs decomposable into large induced matchings and their applications. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, pages 1079–1090, 2012.
[BLM93] Yitzhak Birk, Nathan Linial, and Roy Meshulam. On the uniform-traffic capacity of single-hop interconnections employing shared directional multichannels. IEEE Transactions on Information Theory, 39(1):186–191, 1993.
[Bre73] Lev M. Bregman. Some properties of nonnegative matrices and their permanents. In Soviet Math. Dokl., volume 14, pages 945–949, 1973.
[CK85] Imrich Chlamtac and Shay Kutten. On broadcasting in radio networks: problem analysis and protocol design. IEEE Transactions on Communications, 33(12):1240–1246, 1985.
[Ego81] Gregory P. Egorychev. The solution of van der Waerden's problem for permanents. Advances in Mathematics, 42(3):299–305, 1981.
[Fal81] Dmitry I. Falikman. Proof of the van der Waerden conjecture regarding the permanent of a doubly stochastic matrix. Mathematical Notes, 29(6):475–479, 1981.
[FSGC14] Samuel F. Feng, Michael Schwemmer, Samuel J. Gershman, and Jonathan D. Cohen. Multitasking versus multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors. Cognitive, Affective, & Behavioral Neuroscience, 14(1):129–146, 2014.
[FW16] Uriel Feige and Tal Wagner. Generalized girth problems in graphs and hypergraphs. Manuscript, 2016.
[KP88] János Komlós and Ramamohan Paturi. Convergence results in an associative memory model. Neural Networks, 1(3):239–250, 1988.
[KPR+17] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pages 3521–3526, 2017.
[MC89] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.
[MDO+16] Sebastian Musslick, Biswadip Dey, Kayhan Özcimder, Mostofa Patwary, Ted L. Willke, and Jonathan D. Cohen. Controlled vs. automatic processing: A graph-theoretic approach to the analysis of serial vs. parallel processing in neural network architectures. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci), pages 1547–1552, 2016.
[MSÖ+17] Sebastian Musslick, Andrew Saxe, Kayhan Özcimder, Biswadip Dey, Greg Henselman, and Jonathan D. Cohen. Multitasking capability versus learning efficiency in neural network architectures. In 39th Cognitive Science Society Conference, London, 2017.
[Nei67] Ulrich Neisser. Cognitive psychology. Appleton-Century-Crofts, New York, 1967.
[PRS95] László Pyber, Vojtěch Rödl, and Endre Szemerédi. Dense graphs without 3-regular subgraphs. Journal of Combinatorial Theory, Series B, 63(1):41–54, 1995.
[Pyb85] László Pyber. Regular subgraphs of dense graphs. Combinatorica, 5(4):347–349, 1985.
[RMG+86] David E. Rumelhart, James L. McClelland, PDP Research Group, et al. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1–2. MIT Press, MA, 1986.
[Sch98] Alexander Schrijver. Counting 1-factors in regular bipartite graphs. Journal of Combinatorial Theory, Series B, 72(1):122–135, 1998.
[SS77] Walter Schneider and Richard M. Shiffrin. Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1):1–66, 1977.
[Val00] Leslie G Valiant. Circuits of the Mind. Oxford University Press, 2000.
6,419 | 6,806 | Consistent Robust Regression
Kush Bhatia∗
University of California, Berkeley
[email protected]
Prateek Jain
Microsoft Research, India
[email protected]
Parameswaran Kamalaruban†
EPFL, Switzerland
[email protected]
Purushottam Kar
Indian Institute of Technology, Kanpur
[email protected]
Abstract
We present the first efficient and provably consistent estimator for the robust
regression problem. The area of robust learning and optimization has generated a
significant amount of interest in the learning and statistics communities in recent
years owing to its applicability in scenarios with corrupted data, as well as in
handling model mis-specifications. In particular, special interest has been devoted
to the fundamental problem of robust linear regression where estimators that can
tolerate corruption in up to a constant fraction of the response variables are widely
studied. Surprisingly however, to this date, we are not aware of a polynomial time
estimator that offers a consistent estimate in the presence of dense, unbounded
corruptions. In this work we present such an estimator, called CRR. This solves an
open problem put forward in the work of [3]. Our consistency analysis requires
a novel two-stage proof technique involving a careful analysis of the stability of
ordered lists which may be of independent interest. We show that CRR not only
offers consistent estimates, but is empirically far superior to several other recently
proposed algorithms for the robust regression problem, including extended Lasso
and the T ORRENT algorithm. In comparison, CRR offers comparable or better
model recovery but with runtimes that are faster by an order of magnitude.
1 Introduction
The problem of robust learning involves designing and analyzing learning algorithms that can extract
the underlying model despite dense, possibly malicious, corruptions in the training data provided to
the algorithm. The problem has been studied in a dizzying variety of models and settings, ranging
from regression [19] and classification [11] to dimensionality reduction [4] and matrix completion [8].
In this paper we are interested in the Robust Least Squares Regression (RLSR) problem, which finds
several applications to robust methods in face recognition and vision [22, 21], and economics [19].
In this problem, we are given a set of n covariates in d dimensions, arranged as a data matrix
X = [x₁, . . . , xₙ], and a response vector y ∈ ℝ^n. However, it is known a priori that a certain number
k of these responses cannot be trusted since they are corrupted. These may correspond to corrupted
pixels in visual recognition tasks or untrustworthy measurements in general sensing tasks.
Using these corrupted data points in any standard least-squares solver, especially when k = Ω(n), is
likely to yield a poor model with little predictive power. A solution to this is to exclude corrupted
∗Work done in part while Kush was a Research Fellow at Microsoft Research India.
†Work done in part while Kamalaruban was interning at Microsoft Research India.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Table 1: A comparison of different RLSR algorithms and their properties. CRR is the first efficient
RLSR algorithm to guarantee consistency in the presence of a constant fraction of corruptions.

Paper | Breakdown Point | Adversary | Consistent | Technique
Wright & Ma, 2010 [21] | α → 1 | Oblivious | No | L1 regularization
Chen & Dalalyan, 2010 [7] | α = Ω(1) | Adaptive | No | SOCP
Chen et al., 2013 [6] | α ≤ O(1/√d) | Adaptive | No | Robust thresholding
Nguyen & Tran, 2013 [16] | α → 1 | Oblivious | No | L1 regularization
Nguyen & Tran, 2013b [17] | α → 1 | Oblivious | No | L1 regularization
McWilliams et al., 2014 [14] | α ≤ O(1/√d) | Oblivious | No | Weighted subsampling
Bhatia et al., 2015 [3] | α = Ω(1) | Adaptive | No | Hard thresholding
This paper | α = Ω(1) | Oblivious | Yes | Hard thresholding
points from consideration. The RLSR problem formalizes this requirement as follows:
(ŵ, Ŝ) = arg min_{w ∈ ℝ^p, S ⊆ [n]: |S| = n−k} Σ_{i ∈ S} (y_i − x_i^⊤ w)²,   (1)
This formulation seeks to simultaneously extract the set of uncorrupted points and estimate the
least-squares solution over those uncorrupted points. Due to the combinatorial nature of the RLSR
formulation (1), solving it directly is challenging and, in fact, NP-hard in general [3, 20].
Literature in robust statistics suggests several techniques to solve (1). The most common model
assumes a realizable setting wherein there exists a gold model w* that generates the non-corrupted
responses. A vector of corruptions is then introduced to model the corrupted responses, i.e.
y = X^⊤ w* + b*.   (2)
The goal of RLSR is to recover w* ∈ ℝ^d, the true model. The vector b* ∈ ℝ^n is a k-sparse vector
which takes non-zero values on at most k corrupted samples out of the n total samples, and a zero
value elsewhere. A more useful, but challenging, model is one in which (mostly homoscedastic and
i.i.d.) Gaussian noise ε is injected into the responses in addition to the corruptions:
y = X^⊤ w* + b* + ε.   (3)
Note that the Gaussian noise vector ε is not sparse. In fact, we have ‖ε‖₀ = n almost surely.
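For concreteness, a minimal sketch of sampling from the hybrid model (3) under a standard Gaussian ensemble (our illustrative code; the corruption magnitudes and helper names are arbitrary choices, not from the paper):

import numpy as np

def gen_rlsr_data(n, d, k_star, sigma, seed=0):
    """Sample (X, y, w_star, b_star) from the hybrid model (3)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((d, n))            # columns x_i form the ensemble
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)
    b_star = np.zeros(n)
    support = rng.choice(n, size=k_star, replace=False)
    b_star[support] = 10.0 * rng.standard_normal(k_star)  # unbounded corruptions
    eps = sigma * rng.standard_normal(n)       # dense white noise, ||eps||_0 = n
    y = X.T @ w_star + b_star + eps
    return X, y, w_star, b_star

X, y, w_star, b_star = gen_rlsr_data(n=2000, d=50, k_star=400, sigma=1.0)

Since the support and values of b_star are drawn without reference to X or w_star, this matches the oblivious adversary model discussed below.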
2 Related Works
A string of recent works have looked at the RLSR problem in various settings. To facilitate a
comparison among these, we set the following benchmarks for RLSR algorithms:
1. (Breakdown Point) The number of corruptions k that an RLSR algorithm can tolerate is a direct measure of its robustness. This limit is formalized as the breakdown point of the algorithm in the statistics literature. The breakdown point k is frequently represented as a fraction α of the total number of data points, i.e. k = α · n.
2. (Adversary Model) RLSR algorithms frequently resort to an adversary model to specify how the corruptions are introduced into the regression problem. The strictest is the adaptive adversarial model, wherein the adversary is able to view X and w* (as well as ε if Gaussian noise is present) before deciding upon b*. A weaker model is the oblivious adversarial model, wherein the adversary generates a k-sparse vector in complete ignorance of X and w* (and ε). However, the adversary is still free to make arbitrary choices for the locations and values of corruptions.
3. (Consistency) RLSR algorithms that are able to operate in the hybrid noise model, with sparse adversarial corruptions as well as dense Gaussian noise, are more valuable. An RLSR algorithm is said to be consistent if, when invoked in the hybrid noise model on n data points sampled from a distribution with appropriate characteristics, it returns an estimate ŵ_n such that lim_{n→∞} E‖ŵ_n − w*‖₂ = 0 (for simplicity, assume a fixed covariate design with the expectation being over random Gaussian noise in the responses).
In Table 1, we present a summarized view of existing RLSR techniques and their performance
vis-à-vis the benchmarks discussed above. Past work has seen the application of a wide variety
of algorithmic techniques to solve this problem, including more expensive methods involving L1
regularization (for example, min_{w,b} λ_w‖w‖₁ + λ_b‖b‖₁ + ‖X^⊤w + b − y‖₂²) and second-order cone
programs such as [21, 7, 16, 17], as well as more scalable methods such as robust thresholding
and iterative hard thresholding [6, 3]. As the work of [3] shows, L1 regularization and other expensive
methods struggle to scale to even moderately sized problems.
The adversary models considered by these works are also quite diverse. Half of the works consider an
oblivious adversary and the other half brace themselves against an adaptive adversary. The oblivious
adversary model, although weaker, can model some important practical situations where there is
systematic error in the sensing equipment being used, such as a few pixels in a camera becoming
unresponsive. Such errors are surely not random, and hence cannot be modeled as Gaussian noise,
but they introduce corruptions into the final measurement in a manner that is oblivious of the signal actually
being sensed, in this case the image being photographed.
An important point of consideration is the breakdown point of these methods. Among those cited
in Table 1, the works of [21] and [16] obtain the best breakdown points, which allow a fraction of
corrupted points that is arbitrarily close to 1. They require the data to be generated from either
an isotropic Gaussian ensemble or be row-sampled from an incoherent orthogonal matrix. Most
results mentioned in the table allow a constant fraction of points to be corrupted, i.e. allow k = α · n
corruptions for some fixed constant α > 0. This is still impressive since it allows a dense subset of
data points to be corrupted and yet guarantees recovery. However, as we shall see below, these results
cannot guarantee consistency while allowing k = α · n corruptions.
We note that we use the term dense to refer to the corruptions in our model since they are a constant
fraction of the total available data. Moreover, as we shall see, this constant is universal and
independent of the ambient dimensionality d. This terminology is used to contrast against some other
works which can tolerate only o(n) corruptions, which is arguably much sparser. For instance, as we
shall see below, the work of [17] can tolerate only o(n/log n) corruptions if a consistent estimate is
expected. The work of [6] also offers a weak guarantee wherein they are only able to tolerate a 1/√d
fraction of corruptions. However, [6] allows corruptions in covariates as well.
However, we note that none of the algorithms listed here, and to the best of our knowledge elsewhere
as well, are able to guarantee a consistent solution, irrespective of assumptions on the adversary
model. More specifically, none of these methods are able to guarantee exact recovery of w*, even
with n → ∞ and a constant fraction of corruptions α = Ω(1) (i.e. k = Ω(n)). At best, they
guarantee ‖w − w*‖₂ ≤ O(σ) when k = Ω(n), where σ is the standard deviation of the white noise
(see Equation (3)). Thus, their estimation error is of the order of the white noise in the system, even if
the algorithm is supplied with an infinite amount of data. This is quite unsatisfactory, given our deep
understanding of the consistency guarantees for least squares models.
For example, consider the work of [17] which considers a corruption model similar to (3). The work
makes deterministic assumptions on the data matrix and proposes the following convex program:
min_{w,b} λ_w‖w‖₁ + λ_b‖b‖₁ + ‖X^⊤w + b − y‖₂².   (4)
For Gaussian designs, which we also consider, their results guarantee that for n = Ω(s log d),
‖ŵ − w*‖₂ + ‖b̂ − b*‖₂ ≤ O(√(σ²s log d log n / n) + √(σ²k log n / n)),
where s is the sparsity index of the regressor w*. Note that for k = Θ(n), the right-hand side behaves
as σ·√(log n). Thus, the result is unable to ensure lim_{n→∞} E‖ŵ_n − w*‖₂ = 0.
We have excluded some classical approaches to the RLSR problem from the table, such as [18, 1, 2],
which use the Least Median of Squares (LMS) and Least Trimmed Squares (LTS) methods that
guarantee consistency but may require an exponential running time. Our focus is on polynomial
time algorithms, more specifically those that are efficient and scalable. We note a recent work [5]
in robust stochastic optimization which is able to tolerate a fraction of corruptions α → 1.
However, their algorithms operate in the list-decoding model wherein they output not one, but as
many as O(1/(1 − α)) models, of which one (unknown) model is guaranteed to be correct.
Recovering Sparse High-dimensional Models: We note that several previous works extend their
methods and analyses to handle the case of sparse robust recovery in high-dimensional settings as
well, including [3, 7, 17]. A benefit of such extensions is the ability to work even in data-starved
settings n ≪ d if the true model w* is s-sparse with s ≪ d. However, previous works continue
to require the number of corruptions to be of the order of k = o(n), or else k = O(n/s), in order
to ensure that lim_{n→∞} E‖ŵ_n − w*‖₂ = 0, and cannot ensure consistency if k = Ω(n). This is
evident, for example, from the recovery guarantee offered by [17] discussed above, which requires
k = o(n/log n). We do believe our CRR estimator can be adapted to high-dimensional settings as
well. However, the details are tedious and we reserve them for an expanded version of the paper.
3 Our Contributions
In this paper, we remedy the above problem by using a simple and scalable iterative hard-thresholding
algorithm called CRR along with a novel two-stage proof technique. Given n covariates that form a
Gaussian ensemble, our method, in time poly(n, d), outputs an estimate ŵ_n s.t. ‖ŵ_n − w*‖₂ → 0 as
n → ∞ (see Theorem 4 for a precise statement). In fact, our method guarantees a nearly optimal
error rate of ‖ŵ_n − w*‖₂ ≲ σ√(d/n). It is noteworthy that CRR can tolerate a constant fraction of
corruptions, i.e. tolerate k = α · n corruptions for some fixed α > 0.
We note that although hard thresholding techniques have been applied to the RLSR problem earlier
[3, 6], none of those methods are able to guarantee a consistent solution to the problem. Our results
hold in the setting where a constant fraction of the responses are corrupted by an oblivious adversary
(i.e. one which corrupts observations without information about the data points themselves). Our
algorithm runs in time Õ(d³ + nd), where d is the dimensionality of the data. Moreover, as we shall
see, our technique makes more efficient use of data than previous hard thresholding methods such as
TORRENT [3].
To the best of our knowledge, this is the first efficient and consistent estimator for the RLSR problem
in the challenging setting where a constant fraction of the responses may be corrupted in the presence
of dense noise. We would like to note that the problem of consistent robust regression is especially
challenging because without the assumption of an oblivious adversary, consistent estimation with a
constant fraction of corruptions (even for an arbitrarily small constant) may be impossible even when
supplied with infinitely many data points.
However, by crucially using the restriction of obliviousness on the adversary along with a novel proof
technique, we are able to provide a consistent estimator for RLSR with optimal (up to constants)
statistical and computational complexity.
Discussion on Problem Setting: We clarify that our improvements come at a cost. Our results
assume an oblivious adversary whereas several previous works allowed a fully adaptive adversary.
Indeed, there is no free lunch: it seems unlikely that consistent estimators are even possible in the
face of a fully adaptive adversary who can corrupt a constant fraction of responses since such an
adversary can use his power to introduce biased noise into the model in order to defeat any estimator.
An oblivious adversary is prohibited from looking at the responses before deciding the corruptions
and is thus unable to do the above.
Paper Organization: We will begin our discussion by introducing the problem formulation, relevant
notation, and tools in Section 4. This is followed by Section 5 where we develop CRR, a near-linear
time algorithm that gives consistent estimates for the RLSR problem, which we analyze in Section 6.
Finally in Section 7, we present rigorous experimental benchmarking of this algorithm. In Section 8
we offer some clarifications on how the manuscript was modified in response to reviewer comments.
4 Problem Formulation
We are given n data points X = [x₁, . . . , xₙ] ∈ ℝ^{d×n}, where the x_i ∈ ℝ^d are the covariates and, for
some true model w* ∈ ℝ^d, the vector of responses y ∈ ℝ^n is generated as
y = X^⊤ w* + b* + ε.   (5)
The responses suffer two kinds of perturbations: dense white noise ε_i ~ N(0, σ²) that is chosen
in an i.i.d. fashion independently of the data X and the model w*, and adversarial corruptions
Algorithm 1 CRR: Consistent Robust Regression
Input: Covariates X = [x₁, . . . , xₙ], responses y = [y₁, . . . , yₙ]^⊤, corruption index k, tolerance ε
1: b⁰ ← 0, t ← 0, P_X ← X^⊤(XX^⊤)^{−1}X
2: while ‖b^t − b^{t−1}‖₂ > ε do
3:   b^{t+1} ← HT_k(P_X b^t + (I − P_X)y)
4:   t ← t + 1
5: end while
6: return w^t ← (XX^⊤)^{−1}X(y − b^t)
in the form of b*. We assume that b* is a k*-sparse vector, albeit one with potentially unbounded
entries. The constant k* will be called the corruption index of the problem. We assume the oblivious
adversary model where b* is chosen independently of X, w* and ε.
Although there exist works that operate under a fully adaptive adversary [3, 7], none of these works
are able to give a consistent estimate, whereas our algorithm CRR does provide a consistent estimate.
We also note that existing works are unable to give consistent estimates even in the oblivious adversary
model. Our result requires a significantly finer analysis; the standard ℓ₂-norm style analysis used by
existing works [3, 7] seems incapable of offering a consistent estimation result in the robust regression
setting.
We will require the notions of Subset Strong Convexity and Subset Strong Smoothness, similar to [3],
and reproduce the same below. For any set S ⊆ [n], let X_S := [x_i]_{i∈S} ∈ ℝ^{d×|S|} denote the matrix
with columns in that set. We define v_S for a vector v ∈ ℝ^n similarly. λ_min(X) and λ_max(X) will
denote, respectively, the smallest and largest eigenvalues of a square symmetric matrix X.
Definition 1 (SSC Property). A matrix X ∈ ℝ^{d×n} is said to satisfy the Subset Strong Convexity
Property at level m with constant λ_m if the following holds:
λ_m ≤ min_{|S|=m} λ_min(X_S X_S^⊤).
Definition 2 (SSS Property). A matrix X ∈ ℝ^{d×n} is said to satisfy the Subset Strong Smoothness
Property at level m with constant Λ_m if the following holds:
max_{|S|=m} λ_max(X_S X_S^⊤) ≤ Λ_m.
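On toy instances, the SSC/SSS constants can be computed exactly by enumerating all subsets of size m (our illustrative code, added here; the enumeration is exponential and meant only as a sanity check of Definitions 1 and 2):

import numpy as np
from itertools import combinations

def ssc_sss_constants(X, m):
    # exhaustive lambda_m (Definition 1) and Lambda_m (Definition 2);
    # exponential in n, so only sensible at toy scale
    d, n = X.shape
    lam_m, Lam_m = np.inf, -np.inf
    for S in combinations(range(n), m):
        XS = X[:, list(S)]
        eig = np.linalg.eigvalsh(XS @ XS.T)
        lam_m = min(lam_m, eig[0])
        Lam_m = max(Lam_m, eig[-1])
    return lam_m, Lam_m

X = np.random.default_rng(0).standard_normal((3, 8))
print(ssc_sss_constants(X, 5))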
Intuitively speaking, the SSC and SSS properties ensure that the regression problem remains well
conditioned, even if restricted to an arbitrary subset of the data points. This allows the estimator to
recover the exact model no matter what portion of the data was left uncorrupted by the adversary. We
refer the reader to Appendix A for SSC/SSS bounds for Gaussian ensembles.
5 CRR: A Hard Thresholding Approach to Consistent Robust Regression
We now present a consistent method, CRR, for the RLSR problem. CRR takes a significantly different
approach to the problem than previous works. Instead of attempting to exclude data points deemed
unclean (as done by the TORRENT algorithm proposed by [3]), CRR focuses on correcting the errors.
This allows CRR to work with the entire dataset at all times, as opposed to TORRENT, which works
with a fraction of the data at any given point of time.
To motivate the CRR algorithm, we start with the RLSR formulation
min_{w ∈ ℝ^p, ‖b‖₀ ≤ k*} ½‖X^⊤w − (y − b)‖₂²,
and realize that given any estimate b̂ of the corruption vector, the optimal model with respect to this
estimate is given by the expression ŵ = (XX^⊤)^{−1}X(y − b̂). Plugging this expression for ŵ into the
formulation allows us to reformulate the RLSR problem as
min_{‖b‖₀ ≤ k*} f(b) = ½‖(I − P_X)(y − b)‖₂²,   (6)
where P_X = X^⊤(XX^⊤)^{−1}X. This greatly simplifies the problem by casting it as a sparse parameter
estimation problem instead of a data subset selection problem (as done by TORRENT). CRR directly
optimizes (6) by using a form of iterative hard thresholding. Notice that this approach allows CRR
to keep using the entire set of data points at all times, all the while using the current estimate of the
parameter b to correct the errors in the observations. At each step, CRR performs the following
update: b^{t+1} = HT_k(b^t − ∇f(b^t)), where k is a parameter for CRR. Any value k ≥ 2k* suffices
to ensure convergence and consistency, as will be clarified in the theoretical analysis. The hard
thresholding operator HT_k(·) is defined below.
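For completeness, the algebra behind this update is elementary (we fill it in here): since I − P_X is symmetric and idempotent,
∇f(b) = −(I − P_X)^⊤(I − P_X)(y − b) = −(I − P_X)(y − b),
and therefore
b^t − ∇f(b^t) = b^t + (I − P_X)(y − b^t) = P_X b^t + (I − P_X)y,
which is exactly the quantity that is hard-thresholded in step 3 of Algorithm 1.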
Definition 3 (Hard Thresholding). For any v ∈ ℝ^n, let the permutation σ_v ∈ S_n order the elements
of v in descending order of their magnitudes. Then for any k ≤ n, we define the hard thresholding
operator as v̂ = HT_k(v), where v̂_i = v_i if σ_v^{−1}(i) ≤ k and 0 otherwise.
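Putting Algorithm 1 and Definition 3 together gives a direct NumPy transcription (a sketch we add for illustration; the helper names and stopping details are ours, and forming the n × n matrix P_X explicitly is only sensible at moderate n):

import numpy as np

def hard_threshold(v, k):
    # HT_k of Definition 3: keep the k largest-magnitude entries of v
    out = np.zeros_like(v)
    keep = np.argpartition(np.abs(v), -k)[-k:]
    out[keep] = v[keep]
    return out

def crr(X, y, k, eps=1e-8, max_iter=1000):
    # Algorithm 1; X is d x n with n > d, y has length n, k >= 2 k*
    XXt = X @ X.T
    P = X.T @ np.linalg.solve(XXt, X)          # P_X = X^T (X X^T)^{-1} X
    residual = y - P @ y                       # (I - P_X) y, precomputed
    b = np.zeros_like(y)
    for _ in range(max_iter):
        b_new = hard_threshold(P @ b + residual, k)
        done = np.linalg.norm(b_new - b) <= eps
        b = b_new
        if done:
            break
    w = np.linalg.solve(XXt, X @ (y - b))      # w^t = (X X^T)^{-1} X (y - b^t)
    return w, b

# e.g., with data from the generator sketched below model (3):
# w_hat, b_hat = crr(X, y, k=2 * 400)
# print(np.linalg.norm(w_hat - w_star))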
We note that CRR functions with a fixed, unit step length, which is convenient in practice as it avoids
step length tuning, something most IHT algorithms [12, 13] require. For simplicity of exposition, we
will consider only Gaussian ensembles for the RLSR problem, i.e. x_i ~ N(0, Σ); our proof technique
works for general sub-Gaussian ensembles with appropriate distribution-dependent parameters. Since
CRR interacts with the data only through the projection matrix P_X, for Gaussian ensembles one can
assume without loss of generality that the data points are generated from a spherical Gaussian, i.e.
x_i ~ N(0, I_{d×d}). Our analysis will take care of the condition number of the data ensemble whenever
it is apparent in the convergence rates.
Before moving on to present the consistency and convergence guarantees for CRR, we note that
Gaussian ensembles are known to satisfy the SSC/SSS properties with high probability. For instance,
in the case of the standard Gaussian ensemble, we have SSC/SSS constants of the order of
λ_m ≥ m − O(√(m log(n/m)) + √n) and Λ_m ≤ m + O(√((n − m) log(n/(n − m))) + √n). These results are known
from previous works [3, 10] and are reproduced in Appendix A.
6 Consistency Guarantees for CRR
Theorem 4. Let x_i ∈ ℝ^d, 1 ≤ i ≤ n, be generated i.i.d. from a Gaussian distribution, let the y_i's
be generated using (5) for a fixed w*, and let σ² be the noise variance. Also let the number of
corruptions k* be s.t. 2k* ≤ k ≤ n/10000. Then for any ε, δ > 0, with probability at least 1 − δ, after
O(log(‖b*‖₂/(σ√(k + k*))) + log d) steps, CRR ensures that
‖w^t − w*‖₂ ≤ ε + O((σ/√(λ_min(Σ))) · √((d/n) · log(nd/δ))).
The above result establishes consistency of the CRR method with an error rate of O(σ√(d/n)), which is
known to be statistically optimal. It is notable that this optimal rate is being ensured in the presence of
gross and unbounded outliers. We reiterate that, to the best of our knowledge, this is the first instance
of a poly-time algorithm being shown to be consistent for the RLSR problem. It is also notable that
the result allows the corruption index to be k* = Ω(n), i.e. it allows up to a constant fraction of the total
number of data points to be arbitrarily corrupted while ensuring consistency, which existing results
[3, 6, 16] do not ensure.
We pause a bit to clarify some points regarding the result. Firstly, we note that the upper bound on
the recovery error consists of two terms. The first term is ε, which can be made arbitrarily small simply by
executing the CRR algorithm for several iterations. The second term is more crucial and underscores
the consistency properties of CRR. The second term is of the form O(σ√(d log(nd)/n)) and is
easily seen to vanish with n → ∞ for any constant d, σ. Secondly, we note that the result requires
k* ≤ n/20000, i.e. α ≤ 1/20000. Although this constant might seem small, we stress that these
constants are not the best possible, since we preferred analyses that were more accessible. Indeed,
in our experiments, we found CRR to be robust to much higher corruption levels than what
Theorem 4 guarantees. Thirdly, we notice that the result requires CRR to be executed with the
corruption index set to a value k ≥ 2k*. In practice the value of k can be easily tuned using a simple
binary search because of the speed of execution that CRR offers (see Section 7).
For our analysis, we will divide CRR's execution into two phases: a coarse convergence phase and
a fine convergence phase. CRR will enjoy a linear rate of convergence in both phases. However, the
coarse convergence analysis will only ensure ‖w^t − w*‖₂ = O(σ). The fine convergence phase
will then use a much more careful analysis of the algorithm to show that in at most O(log n) more
iterations, CRR ensures ‖w^t − w*‖₂ = O(σ√(d/n)), thus establishing consistency of the method.
Existing methods, such as TORRENT, ensure an error level of O(σ), but no better.
As shorthand notation, let Δ^t := (XX^⊤)^{−1}X(b^t − b*), g := (I − P_X)ε, and v^t = X^⊤Δ^t + g. Let
S* := supp(b*) be the true locations of the corruptions and I^t := supp(b^t) ∪ supp(b*).
Coarse convergence: Here we establish a result that guarantees that after a certain number of steps
T₀, CRR identifies the corruption vector with relatively high accuracy and consequently ensures
that ‖w^{T₀} − w*‖₂ ≤ O(σ).
Lemma 5. For any data matrix X that satisfies the SSC and SSS properties such that 2Λ_{k+k*}/λ_n < 1,
CRR, when executed with k ≥ k*, ensures for any ε, δ > 0, with probability at least 1 − δ (over
the random Gaussian noise in the responses, see (3)), that after T₀ = O(log(‖b*‖₂/(e₀ + ε))) steps,
‖b^{T₀} − b*‖₂ ≤ 3e₀ + ε, where e₀ = O(σ√((k + k*) log(n/(δ(k + k*))))) for standard Gaussian designs.
Using Lemma 12 (see the appendix), we can translate the above result to show that ‖w^{T₀} − w*‖₂ ≤
0.95σ + ε, assuming k* ≤ k ≤ n/150. However, Lemma 5 will be more useful in the following fine
convergence analysis.
Fine convergence: We now show that CRR progresses further at a linear rate to achieve a consistent
solution. In Lemma 6, we show that ‖X(b^t − b*)‖₂ exhibits a linear decrease at every iteration t > T₀,
up to an additive term which is O(σ√(dn)). The proof proceeds by showing that for any fixed Δ^t such that
‖Δ^t‖₂ ≤ σ/100, we obtain a linear decrease in ‖Δ^{t+1}‖₂ = ‖(XX^⊤)^{−1}X(b^{t+1} − b*)‖₂. We then take
a union bound over a fine ε-net over all possible values of Δ^t to obtain the final result.
Lemma 6. Let X = [x₁, x₂, . . . , xₙ] be a data matrix consisting of i.i.d. standard normal vectors, i.e.
x_i ~ N(0, I_{d×d}), and let ε ~ N(0, σ²·I_{n×n}) be a vector of white noise values drawn
independently of X. For any Δ ∈ ℝ^d such that ‖Δ‖₂ ≤ σ/100, define b_new = HT_k(X^⊤Δ + ε + b*),
z_new = b_new − b*, and Δ_new = (XX^⊤)^{−1}X z_new, where k ≥ 2k*, |supp(b*)| ≤ k*, k* ≤ n/10000,
and d ≤ n/10000. Then, with probability at least 1 − 1/n⁵, for every Δ s.t. ‖Δ‖₂ ≤ σ/100, we have
‖X z_new‖₂ ≤ 0.9·n‖Δ‖₂ + 100σ√(d·n)·log² n,
‖Δ_new‖₂ ≤ 0.91‖Δ‖₂ + 110σ√(d/n)·log² n.
Putting all these results together establishes Theorem 4. See Appendix B for a detailed proof. Note that while both the coarse and fine stages offer a linear rate of convergence, it is the fine phase that ensures consistency. Indeed, the coarse phase only acts as a sort of good-enough initialization. Several results in non-convex optimization assume a nice initialization "close" to the optimum (alternating minimization, EM, etc.). In our case, we have a happy situation where the initialization and main algorithms are one and the same. Note that we could actually have used other algorithms, e.g. TORRENT, to perform the initialization as well, since TORRENT [3, Theorem 10] essentially offers the same (weak) guarantee as Lemma 5.
7 Experiments
Experiments were carried out on synthetically generated linear regression datasets with corruptions. All implementations were done in Matlab and were run on a single-core 2.4GHz machine with 8GB RAM. The experiments establish the following: 1) CRR gives consistent estimates of the regression model, especially in situations with a large number of corruptions where the ordinary least squares estimator fails catastrophically; 2) CRR scales better to large datasets than the TORRENT-FC algorithm of [3] (up to 5x faster) and the Extended Lasso algorithm of [17] (up to 20x faster). The main reason behind this speedup is that TORRENT keeps changing its mind on which active set of points it wishes to work with. Consequently, it expends a lot of effort processing each active set. CRR, on the other hand, does not face such issues since it always works with the entire set of points.
Data: The model $w^* \in \mathbb{R}^d$ was chosen to be a random unit norm vector. The data was generated as $x_i \sim \mathcal{N}(0, I_d)$. The $k^*$ responses to be corrupted were chosen uniformly at random and the
[Figure 1: four panels plotting the recovery error $\|w - w^*\|_2$ of OLS, ex-Lasso, TORRENT-FC and CRR against (a) the number of data points n (d = 500, sigma = 1, k = 600), (b) the dimensionality d (n = 2000, sigma = 1, k = 600), (c) the number of corruptions k (n = 2000, d = 500, sigma = 1), and (d) the white noise level sigma (n = 2000, d = 500, k = 600).]
Figure 1: Variation of recovery error with varying number of data points $n$, dimensionality $d$, number of corruptions $k^*$ and white noise variance $\sigma$. CRR and TORRENT show better recovery properties than the non-robust OLS on all experiments. Extended Lasso offers comparable or slightly worse recovery in most settings. Figure 1(a) ascertains the $\tilde{O}(\sqrt{1/n})$-consistency of CRR, as shown in the theoretical analysis.
[Figure 2: (a) average CPU time (in sec) of ex-Lasso, TORRENT-FC and CRR versus the number of data points n (d = 1000, sigma = 7.5, k = 0.3n); (b) fraction of corruptions identified versus iteration number (n = 500, d = 100, k = 0.37n); (c) $\|b^t - b^*\|_2$ versus iteration number (n = 2000, d = 500, k = 0.37n); (d) $\|b^t - b^*\|_2$ versus iteration number (n = 5000, d = 100, k = 0.37n); panels (b)-(d) report curves for sigma in {0.01, 0.05, 0.1, 0.5}.]
Figure 2: Figure 2(a) shows the average CPU run times of CRR, TORRENT and Extended Lasso with varying sample sizes. CRR can be an order of magnitude faster than TORRENT and Extended Lasso on problems in 1000 dimensions while ensuring similar recovery properties. Figures 2(b), 2(c) and 2(d) show that CRR eventually not only captures the total mass of the corruptions, but also recovers the support of the corrupted points in an accurate manner. With every iteration, CRR improves upon its estimate of $b^*$ and provides cleaner points for the estimation of $w$. CRR is also able to very effectively utilize larger data sets to offer much faster convergence. Notice the visibly faster convergence in Figure 2(d), which uses 10x more points than Figure 2(c).
value of the corruptions was set as $b^*_i \sim \mathrm{Unif}(10, 20)$. Responses were then generated as $y_i = \langle x_i, w^* \rangle + \epsilon_i + b^*_i$, where $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$. All reported results were averaged over 20 random trials.
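For reference, one such synthetic instance can be generated as in the following sketch; the default constants mirror the figures, and the seed handling is an assumption.

```python
import numpy as np

def make_rlsr_instance(n=2000, d=500, k_star=600, sigma=1.0, seed=None):
    """Generate one synthetic corrupted regression instance as described above."""
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)                 # random unit-norm model
    X = rng.standard_normal((d, n))                  # x_i ~ N(0, I_d), as columns
    b_star = np.zeros(n)
    corrupted = rng.choice(n, size=k_star, replace=False)
    b_star[corrupted] = rng.uniform(10.0, 20.0, size=k_star)  # b*_i ~ Unif(10, 20)
    eps = sigma * rng.standard_normal(n)             # white noise
    y = X.T @ w_star + eps + b_star                  # y_i = <x_i, w*> + eps_i + b*_i
    return X, y, w_star, b_star
```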
Evaluation Metric: We measure the performance of various algorithms using the standard $L_2$ error: $r_{\hat{w}} = \|\hat{w} - w^*\|_2$. For the timing experiments, we deemed an algorithm to have converged on an instance if it obtained a model $w^t$ such that $\|w^t - w^{t-1}\|_2 \le 10^{-4}$.
Baseline Algorithms: CRR was compared to three baselines: 1) the Ordinary Least Squares (OLS) estimator, which is oblivious to the presence of any corruptions in the responses; 2) the TORRENT algorithm of [3], which is a recently proposed method for performing robust least squares regression; and 3) the Extended Lasso (ex-Lasso) approach of [15], for which we use the FISTA implementation of [23] and choose the regularization parameters for our model data as mentioned by the authors.
Recovery Properties & Timing: CRR, TORRENT and ex-Lasso were found to be competitive, and offered much lower residual errors $\|w - w^*\|_2$ than the non-robust OLS method when varying the dataset size (Figure 1(a)), dimensionality (Figure 1(b)), number of corrupted responses (Figure 1(c)), and magnitude of white noise (Figure 1(d)). In terms of scaling properties, CRR exhibited faster runtimes than TORRENT-FC, as depicted in Figure 2(a). CRR can be up to 5x faster than TORRENT and up to 20x faster than ex-Lasso on problems of 1000 dimensions. Figure 2(a) suggests that executing both TORRENT and ex-Lasso becomes very expensive with an order of magnitude increase in the dimension parameter of the problem, while CRR scales gracefully. Also, Figures 2(c) and 2(d) show the variation of $\|b^t - b^*\|_2$ for various values of the noise parameter $\sigma$. The plots depict the fact that as $\sigma \to 0$, CRR correctly identifies all the corrupted points and estimates the level of corruption correctly, thereby returning the exact solution $w^*$. Notice that in Figure 2(d), which utilizes more data points, CRR offers uniformly faster convergence across all white noise levels.
Choice of Potential Function: In Lemmata 5 and 6, we show that $\|b^t - b^*\|_2$ decreases with every iteration. Figures 2(c) and (d) back this theoretical statement by showing that CRR's estimate of $b^*$ improves with every iteration. Along with estimating the magnitude of $b^*$, Figure 2(b) shows that CRR is also able to correctly identify the support of the corrupted points with increasing iterations.
8 Response to Reviewer Comments
We are thankful to the reviewers for their comments aimed at improving the manuscript. Below we offer some clarifications regarding the same.
1. We have fixed all typographical errors pointed out in the reviews.
2. We have included additional references as pointed out in the reviews.
3. We have improved the presentation of the statements of the results to make the theorem and lemma statements more crisp and self-contained.
4. We have fixed minor inconsistencies in the figures by executing the experiments afresh.
5. We note that CRR's reduction of the robust recovery problem to sparse recovery is not only novel, but also one that offers impressive speedups in practice over the fully corrective version of the existing TORRENT algorithm [3]. However, note that the reduction to sparse recovery actually hides a sort of "fully-corrective" step, wherein the optimal model for a particular corruption estimate is used internally in the formulation. Thus, CRR is implicitly a fully corrective algorithm as well.
6. We agree with the reviewers that further efforts are needed to achieve results with sharper constants. For example, CRR offers robustness up to a breakdown fraction of 1/20000 which, although a constant, nevertheless leaves room for improvement. Having shown for the first time that tolerating a non-trivial, universally constant fraction of corruptions is possible in polynomial time, it is indeed encouraging to study how far the breakdown point can be pushed for various families of algorithms.
7. Our current efforts are aimed at solving robust sparse recovery problems in high-dimensional settings in a statistically consistent manner, as well as extending the consistency properties established in this paper to non-Gaussian, for example fixed, designs.
Acknowledgments
The authors thank the reviewers for useful comments. PKar is supported by the Deep Singh and
Daljeet Kaur Faculty Fellowship and the Research-I Foundation at IIT Kanpur, and thanks Microsoft
Research India and Tower Research for research grants.
References
[1] J. Ámos Víšek. The least trimmed squares. Part I: Consistency. Kybernetika, 42:1–36, 2006.
[2] J. Ámos Víšek. The least trimmed squares. Part II: sqrt(n)-consistency. Kybernetika, 42:181–202, 2006.
[3] K. Bhatia, P. Jain, and P. Kar. Robust Regression via Hard Thresholding. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), 2015.
[4] E. J. Candès, X. Li, and J. Wright. Robust Principal Component Analysis? Journal of the ACM, 58(1):1–37, 2009.
[5] M. Charikar, J. Steinhardt, and G. Valiant. Learning from Untrusted Data. arXiv:1611.02315 [cs.LG], 2016.
[6] Y. Chen, C. Caramanis, and S. Mannor. Robust Sparse Regression under Adversarial Corruption. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[7] Y. Chen and A. S. Dalalyan. Fused sparsity and robust estimation for linear models with unknown variance. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), 2012.
[8] Y. Cherapanamjeri, K. Gupta, and P. Jain. Nearly-optimal Robust Matrix Completion. arXiv:1606.07315 [cs.LG], 2016.
[9] F. Cucker and S. Smale. On the Mathematical Foundations of Learning. Bulletin of the American Mathematical Society, 39(1):1–49, 2001.
[10] M. A. Davenport, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk. A Simple Proof that Random Matrices are Democratic. Technical Report TREE0906, Rice University, Department of Electrical and Computer Engineering, 2009.
[11] J. Feng, H. Xu, S. Mannor, and S. Yan. Robust Logistic Regression and Classification. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[12] R. Garg and R. Khandekar. Gradient Descent with Sparsification: An Iterative Algorithm for Sparse Recovery with Restricted Isometry Property. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[13] P. Jain, A. Tewari, and P. Kar. On Iterative Hard Thresholding Methods for High-dimensional M-estimation. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[14] B. McWilliams, G. Krummenacher, M. Lucic, and J. M. Buhmann. Fast and Robust Least Squares Estimation in Corrupted Linear Models. In 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[15] N. M. Nasrabadi, T. D. Tran, and N. Nguyen. Robust Lasso with Missing and Grossly Corrupted Observations. In Advances in Neural Information Processing Systems, pages 1881–1889, 2011.
[16] N. H. Nguyen and T. D. Tran. Exact recoverability from dense corrupted observations via l1-minimization. IEEE Transactions on Information Theory, 59(4):2017–2035, 2013.
[17] N. H. Nguyen and T. D. Tran. Robust Lasso With Missing and Grossly Corrupted Observations. IEEE Transactions on Information Theory, 59(4):2036–2058, 2013.
[18] P. J. Rousseeuw. Least Median of Squares Regression. Journal of the American Statistical Association, 79(388):871–880, 1984.
[19] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. John Wiley and Sons, 1987.
[20] C. Studer, P. Kuppinger, G. Pope, and H. Bölcskei. Recovery of Sparsely Corrupted Signals. IEEE Transactions on Information Theory, 58(5):3115–3130, 2012.
[21] J. Wright and Y. Ma. Dense Error Correction via l1 Minimization. IEEE Transactions on Information Theory, 56(7):3540–3560, 2010.
[22] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust Face Recognition via Sparse Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.
[23] A. Y. Yang, Z. Zhou, A. G. Balasubramanian, S. S. Sastry, and Y. Ma. Fast l1-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing, 22(8):3234–3246, 2013.
Learning when to Trust Past Estimates
Zhongwen Xu
DeepMind
[email protected]
Andre Barreto
DeepMind
[email protected]
Joseph Modayil
DeepMind
[email protected]
David Silver
DeepMind
[email protected]
Hado van Hasselt
DeepMind
[email protected]
Tom Schaul
DeepMind
[email protected]
Abstract
Neural networks have a smooth initial inductive bias, such that small changes in
input do not lead to large changes in output. However, in reinforcement learning
domains with sparse rewards, value functions have non-smooth structure with
a characteristic asymmetric discontinuity whenever rewards arrive. We propose
a mechanism that learns an interpolation between a direct value estimate and a
projected value estimate computed from the encountered reward and the previous
estimate. This reduces the need to learn about discontinuities, and thus improves
the value function approximation. Furthermore, as the interpolation is learned
and state-dependent, our method can deal with heterogeneous observability. We
demonstrate that this one change leads to significant improvements on multiple
Atari games, when applied to the state-of-the-art A3C algorithm.
1
Motivation
The central problem of reinforcement learning is value function approximation: how to accurately
estimate the total future reward from a given state. Recent successes have used deep neural networks
to approximate the value function, resulting in state-of-the-art performance in a variety of challenging
domains [9]. Neural networks are most effective when the desired target function is smooth. However,
value functions are, by their very nature, discontinuous functions with sharp variations over time. In
this paper we introduce a representation of value that matches the natural temporal structure of value
functions.
A value function represents the expected sum of future discounted rewards. If non-zero rewards occur
infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as
such rewarding moments approach and drops immediately after. This is depicted schematically with
the dashed black line in Figure 1. The true value function is quite smooth, except immediately after
receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains
associate positive or negative reinforcements to salient events (like picking up an object, hitting a
wall, or reaching a goal position). The problem is that the agent's observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator, especially when employing differentiable function approximators such as neural
networks that naturally make smooth maps from observations to outputs.
To address this problem, we incorporate the temporal structure of cumulative discounted rewards into
the value function itself. The main idea is that, by default, the value function can respect the reward
sequence. If no reward is observed, then the next value smoothly matches the previous value, but
Figure 1: After the same amount of training, our proposed method (red) produces much more accurate
estimates of the true value function (dashed black), compared to the baseline (blue). The main plot
shows discounted future returns as a function of the step in a sequence of states; the inset plot shows
the RMSE when training on this data, as a function of network updates. See section 4 for details.
becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from
the previous value: in other words a reward that was expected has now been consumed. The natural
value approximator (NVA) combines the previous value with the observed rewards and discounts,
which makes this sequence of values easy to represent by a smooth function approximator such as a
neural network.
Natural value approximators may also be helpful in partially observed environments. Consider a
situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it
will take until the agent has crossed a valley to another hill top in the distance. There is fog in the
valley, which means that if the agent's state is a single observation from the valley it will not be able
to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top
may be much better, because the observation is richer. This case is depicted schematically in Figure 2.
Natural value approximators may be effective in these situations, since they represent the current
value in terms of previous value estimates.
2 Problem definition
We consider the typical scenario studied in reinforcement learning, in which an agent interacts with
an environment at discrete time intervals: at each time step t the agent selects an action as a function
of the current state, which results in a transition to the next state and a reward. The goal of the agent
is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12].
The interaction between the agent and the environment is modelled as a Markov Decision Process
(MDP). An MDP is a tuple (S, A, R, ?, P ) where S is a state space, A is an action space, R :
S ?A?S ? D(R) is a reward function that defines a distribution over the reals for each combination
of state, action, and subsequent state, P : S ? A ? D(S) defines a distribution over subsequent
states for each state and action, and ?t ? [0, 1] is a scalar, possibly time-dependent, discount factor.
One common goal is to make accurate predictions under a behaviour policy ? : S ? D(A) of the
value
v? (s) ? E [R1 + ?1 R2 + ?1 ?2 R3 + . . . | S0 = s] .
(1)
The expectation is over the random variables At ? ?(St ), St+1 ? P (St , At ), and Rt+1 ?
R(St , At , St+1 ), ?t ? N+ . For instance, the agent can repeatedly use these predictions to improve
its policy. The values satisfy the recursive Bellman equation [2]
v? (s) = E [Rt+1 + ?t+1 v? (St+1 ) | St = s] .
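As a side note, for a small MDP whose dynamics are known and whose discount is a fixed scalar, this Bellman equation can be solved exactly with a linear solve. The sketch below is purely illustrative and is not part of the paper's method.

```python
import numpy as np

def evaluate_policy(P_pi, r_pi, gamma):
    """Solve v = r_pi + gamma * P_pi @ v exactly for a finite MDP.

    P_pi : (S, S) state-to-state transition matrix under the policy pi.
    r_pi : (S,) expected one-step reward under pi.
    """
    S = P_pi.shape[0]
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
```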
We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions are made by an approximate value function $v(s;\theta)$, where $\theta$ are parameters that are learned. The approximation of the true value function can be formed by temporal difference (TD) learning [10], where the estimate at time $t$ is updated towards
$$Z^1_t \equiv R_{t+1} + \gamma_{t+1}\,v(S_{t+1};\theta) \quad\text{or}\quad Z^n_t \equiv \sum_{i=1}^{n}\Big(\prod_{k=1}^{i-1}\gamma_{t+k}\Big)R_{t+i} + \Big(\prod_{k=1}^{n}\gamma_{t+k}\Big)v(S_{t+n};\theta),\qquad(2)$$
where $Z^n_t$ is the $n$-step bootstrap target, and the TD-error is $\delta^n_t \equiv Z^n_t - v(S_t;\theta)$.
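As a concrete reference, the n-step target in (2) can be computed from a stored trajectory as in the sketch below; the array layout is an assumption.

```python
def n_step_target(rewards, discounts, v_bootstrap, n):
    """Z_t^n from (2).

    rewards[i]   = R_{t+1+i}   for i = 0..n-1,
    discounts[i] = gamma_{t+1+i},
    v_bootstrap  = v(S_{t+n}; theta).
    """
    target, cum = 0.0, 1.0
    for i in range(n):
        target += cum * rewards[i]   # (prod_{k=1}^{i-1} gamma_{t+k}) * R_{t+i}
        cum *= discounts[i]
    return target + cum * v_bootstrap
```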
3 Proposed solution: Natural value approximators
The conventional approach to value function approximation produces a value estimate from features
associated with the current state. In states where the value approximation is poor, it can be better to
rely more on a combination of the observed sequence of rewards and older but more reliable value
estimates that are projected forward in time. Combining these estimates can potentially be more
accurate than using one alone.
These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate,
$V_t \equiv v(S_t;\theta)$, is a conventional value function estimate at time $t$. The second estimate,
$$G^p_t \equiv \frac{G^\beta_{t-1} - R_t}{\gamma_t}\quad\text{if }\gamma_t > 0\text{ and }t > 0,\qquad(3)$$
is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time $t$. The third estimate,
$$G^\beta_t \equiv \beta_t G^p_t + (1-\beta_t)V_t = (1-\beta_t)V_t + \beta_t\,\frac{G^\beta_{t-1} - R_t}{\gamma_t},\qquad(4)$$
is a convex combination of the first two estimates¹ formed by a time-dependent blending coefficient $\beta_t$. This coefficient is a learned function of state $\beta(\cdot;\theta) : \mathcal{S} \to [0,1]$, over the same parameters $\theta$, and we denote $\beta_t \equiv \beta(S_t;\theta)$. We call $G^\beta_t$ the natural value estimate at time $t$ and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training.
The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate $V_t$ and the target $Z_t$, weighted by how much it is used in the natural value estimate,
$$J_V \equiv \mathbb{E}\left[[[1-\beta_t]]\,([[Z_t]] - V_t)^2\right],\qquad(5)$$
where we introduce the stop-gradient identity function $[[x]] = x$ that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient $\beta_t$,
$$J_\beta \equiv \mathbb{E}\left[\left([[Z_t]] - (\beta_t[[G^p_t]] + (1-\beta_t)[[V_t]])\right)^2\right].\qquad(6)$$
These two losses are summed into a joint loss,
$$J = J_V + c_\beta J_\beta,\qquad(7)$$
where $c_\beta$ is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of $V_t$ are adapted with the first loss and the parameters of $\beta_t$ are adapted with the second loss.
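The stop-gradient structure of (5)-(7) can be made explicit by writing out the only gradient paths it leaves open. The following numpy-style sketch computes the per-step losses and those two gradients; it is an illustration of the semantics, not the authors' implementation.

```python
def nva_losses_and_grads(z, v, g_p, beta, c_beta=1.0):
    """Per-step losses (5)-(7) with the gradients implied by the stop-gradients.

    z is the target Z_t (a constant under [[.]]); v = V_t, g_p = G^p_t,
    beta = beta_t. Only dJ/dv (from (5)) and dJ/dbeta (from (6)) are non-zero.
    """
    g_beta = beta * g_p + (1.0 - beta) * v
    J_V = (1.0 - beta) * (z - v) ** 2          # (5); the weight (1 - beta) is stop-gradiented
    J_b = (z - g_beta) ** 2                    # (6); g_p and v are stop-gradiented
    J = J_V + c_beta * J_b                     # (7)
    dJ_dv = -2.0 * (1.0 - beta) * (z - v)                  # from (5) only
    dJ_dbeta = -2.0 * c_beta * (z - g_beta) * (g_p - v)    # from (6) only
    return J, dJ_dv, dJ_dbeta
```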
When bootstrapping on future values, the most accurate value estimate is best, so using $G^\beta_t$ instead of $V_t$ leads to refined prediction targets
$$Z^{\beta,1}_t \equiv R_{t+1} + \gamma_{t+1}G^\beta_{t+1} \quad\text{or}\quad Z^{\beta,n}_t \equiv \sum_{i=1}^{n}\Big(\prod_{k=1}^{i-1}\gamma_{t+k}\Big)R_{t+i} + \Big(\prod_{k=1}^{n}\gamma_{t+k}\Big)G^\beta_{t+n}.\qquad(8)$$
4 Illustrative Examples
We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate $G^\beta_t$ instead of the direct value estimate $V_t$.
¹ Note the mixed recursion in the definition: $G^p$ depends on $G^\beta$, and vice versa.
Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point $0 \le t \le 100$ on the horizontal axis corresponds to one state $S_t$ in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with $\gamma = 0.9$. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so $S_t \in \mathbb{R}^4$. The approximators $v(s)$ and $\beta(s)$ are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input to $\beta$ is augmented with the last $k = 16$ rewards. For the baseline experiment, we fix $\beta_t = 0$. The networks are trained for 5000 steps using Adam [5] with minibatch size 32. Because of the small capacity of the v-network, the baseline struggles to make accurate predictions, and instead it makes systematic errors that smooth over the characteristic peaks and drops in the value function. The natural value estimation obtains ten times lower root mean squared error (RMSE), and it also closely matches the qualitative shape of the target function.
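A minimal sketch of generating such targets and features follows; the reward placement and the RBF widths/centres are assumptions, since the text does not specify them.

```python
import numpy as np

def toy_sequence(T=101, gamma=0.9, reward_steps=(20, 45, 46, 80)):
    """Targets and features for a regression setup like the one in Figure 1."""
    rewards = np.zeros(T)
    rewards[list(reward_steps)] = 1.0          # a handful of reward events (assumed positions)
    values = np.zeros(T)
    for t in reversed(range(T - 1)):           # v_t = R_{t+1} + gamma * v_{t+1}
        values[t] = rewards[t + 1] + gamma * values[t + 1]
    centres = np.linspace(0.0, T - 1, 4)       # 4 equally spaced RBFs
    steps = np.arange(T)[:, None]
    features = np.exp(-0.5 * ((steps - centres) / (T / 4.0)) ** 2)  # S_t in R^4
    return features, values
```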
Heterogeneous observability Our approach is not limited to the sparse-reward setting. Imagine an agent that stands on the top of a hill. By looking in the distance, the agent may be able to predict how many steps should be taken to take it to the next hill top. When the agent starts descending the hill, it walks into fog in the valley between the hills. There, it can no longer see where it is. However, it could still determine how many steps remain until the next hill by using the estimate from the first hill and then simply counting steps. This is exactly what the natural value estimate $G^\beta_t$ will give us, assuming $\beta_t = 1$ on all steps in the fog. Figure 2 illustrates this example, where we assumed each step has a reward of $-1$ and the discount is one. The best observation-dependent value $v(S_t)$ is shown in dashed blue. In the fog, the agent can then do no better than to estimate the average number of steps from a foggy state until the next hill top. In contrast, the true value, shown in red, can be achieved exactly with natural value estimates. Note that in contrast to Figure 1, rewards are dense rather than sparse.
In both examples, we can sometimes trust past value functions more than current estimations, either because of function approximation error, as in the first example, or partial observability.
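The fog claim can be checked numerically with the recursion from Section 3: inside the fog, with $\beta_t = 1$, $R_t = -1$ and $\gamma_t = 1$, the projected estimate exactly decrements the steps-to-go. The sketch below is purely illustrative.

```python
def fog_steps_to_go(value_before_fog, n_fog_steps):
    """Carry the value through aliased states: beta_t = 1, R_t = -1, gamma_t = 1."""
    g = value_before_fog                # e.g. -40.0 if 40 steps remain
    out = []
    for _ in range(n_fog_steps):
        g = (g - (-1.0)) / 1.0          # (3): exactly one fewer step to go
        out.append(g)                   # with beta = 1, G^beta_t = G^p_t
    return out

# fog_steps_to_go(-40.0, 3) -> [-39.0, -38.0, -37.0]
```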
[Figure 2: true value (solid red) and best observation-based estimate (dashed blue) versus step, with the fog region marked between the two hills.]
Figure 2: The value is the negative number of steps until reaching the destination at t = 100. In some parts of the state space, all states are aliased (in the fog). For these aliased states, the best estimate based only on immediate observations is a constant value (dashed blue line). Instead, if the agent relies on the value just before the fog and then decrements it by encountered rewards, while ignoring observations, then the agent can match the true value (solid red line).
5 Deep RL experiments
In this section, we integrate our method within A3C (Asynchronous advantage actor-critic [9]), a widely used deep RL agent architecture that uses a shared deep neural network to both estimate the policy $\pi$ (actor) and a baseline value estimate $v$ (critic). We modify it to use $G^\beta_t$ estimates instead of the regular value baseline $V_t$. In the simplest, feed-forward variant, the network architecture is composed of three layers of convolutions, followed by a fully connected layer with output $h$, which feeds into the two separate heads ($\pi$ with an additional softmax, and a scalar $v$; see the black components in the diagram below). The updates are done online with a buffer of the past 20-state transitions. The value targets are $n$-step targets $Z^n_t$ (equation 2), where each $n$ is chosen such that it bootstraps on the state at the end of the 20-state buffer. In addition, there is a loss contribution from the actor's policy gradient update on $\pi$. We refer the reader to [9] for details.
Table 1: Mean and median human-normalized scores on 57 Atari games, for the A3C baseline and our method, using both evaluation metrics. N75 indicates the number of games that achieve at least 75% human performance.

                        human starts                 no-op starts
  Agent           N75     median   mean        N75     median   mean
  A3C baseline    28/57   68.5%    310.4%      31/57   91.6%    334.0%
  A3C + NVA       30/57   93.5%    373.3%      32/57   117.0%   408.4%
Our method differs from the baseline A3C setup in the form of the value estimator in the critic ($G^\beta_t$ instead of $V_t$), the bootstrap targets ($Z^{\beta,n}_t$ instead of $Z^n_t$) and the value loss ($J$ instead of $J_V$), as discussed in Section 3. The diagram on the right shows those new components in green; thick arrows denote functions with learnable parameters, thin ones without.
[Diagram: the shared network $h(S_t)$ feeds the policy head $\pi$ and the value head $v$; the gate $\beta_t$, computed from $h$ and $R_{t-k:t}$, blends $v$ with the projected estimate formed from $G^\beta_{t-1}$, $R_t$ and $\gamma_t$ to produce $G^\beta_t$.]
In terms of the network architecture, we parametrize the blending coefficient $\beta$ as a linear function of the hidden representation $h$ concatenated with a window of past rewards $R_{t-k:t}$, followed by a sigmoid:
$$\beta(S_t;\theta) \equiv \frac{\gamma_t}{1 + \exp\left(-\theta_\beta^\top\,[h(S_t);\,R_{t-k:t}]\right)},\qquad(9)$$
where $\theta_\beta$ are the parameters of the $\beta$ head of the network, and we set $k$ to 50. The extra factor of $\gamma_t$ handles the otherwise undefined beginnings of episodes (when $\gamma_0 = 0$), and it ensures that the time-scale across which estimates can be projected forward cannot exceed the time-scale induced by the discounting².
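A plain-numpy sketch of the gate in (9) follows; the exact feature layout and any initialization are assumptions.

```python
import numpy as np

def beta_gate(h, recent_rewards, theta_beta, gamma_t):
    """Equation (9): beta_t = gamma_t * sigmoid(theta_beta^T [h(S_t); R_{t-k:t}])."""
    features = np.concatenate([h, recent_rewards])   # hidden state plus last k rewards
    return gamma_t / (1.0 + np.exp(-theta_beta @ features))
```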
We investigate the performance of natural value estimates on a collection of 57 video games from the Atari Learning Environment [1], which has become a standard benchmark for Deep RL methods because of the rich diversity of challenges present in the various games. We train agents for 80 million agent steps (320 million Atari game frames) on a single machine with 16 cores, which corresponds to the number of frames denoted as "1 day on CPU" in the original A3C paper. All agents are run with one seed and a single, fixed set of hyper-parameters. Following [8], the performance of the final policy is evaluated under two modes, with a random number of no-ops at the start of each episode, and from randomized starting points taken from human trajectories.
5.1 Results
Table 1 summarizes the aggregate performance results across all 57 games, normalized by human
performance. The evaluation results are presented under two different conditions, the human starts
condition evaluates generalization to a different starting state distribution than the one used in training,
and the no-op starts condition evaluates performance on the same starting state distribution that was
used in training. We summarize normalized performance improvements in Figure 3. In the appendix,
we provide full results for each game in Table 2 and Table 3. Across the board, we find that adding
NVA improves the performance on a number of games, and improves the median normalized score
by 25% or 25.4% for the respective evaluation metrics.
The second measure of interest is the change in value error when using natural value estimates; this is shown in Figure 4. The summary across all games is that the natural value estimates are more accurate, sometimes substantially so. Figure 4 also shows detailed plots from a few representative games, showing that large accuracy gaps between $V_t$ and $G^\beta$ lead to the learning of larger blending proportions $\beta$.
The fact that more accurate value estimates improve final performance on only some games should not be surprising, as they only directly affect the critic and they affect the actor indirectly. It is also
² This design choice may not be ideal in all circumstances; sometimes projecting old estimates further can perform better. Our variant, however, has the useful side-effect that the weight for the $V_t$ update (Equation 5) is now greater than zero independently of $\gamma$. This prevents one type of vicious cycle, where an initially inaccurate $V_t$ leads to a large $\beta$, which in turn reduces the learning of $V_t$, and leads to an unrecoverable situation.
[Figure 3: per-game bar chart of normalized performance gains of A3C + NVA over the A3C baseline on all 57 games. The largest gains are on video_pinball (+453%), space_invaders (+70%), asteroids (+50%), up_n_down (+38%) and berzerk (+36%); the largest losses are on assault (-12%), ms_pacman (-12%) and chopper_command (-10%); most remaining games change by less than 25%, and several (e.g. enduro, freeway, venture, private_eye, seaquest) are unchanged.]
Figure 3: The performance gains of the proposed architecture over the baseline system, with the performance normalized for each game with the formula $\frac{\text{proposed}-\text{baseline}}{\max(\text{human},\,\text{baseline})-\text{random}}$ used previously in the literature [15].
unclear for how many games the bottleneck is value accuracy instead of exploration, memory, local
optima, or sample efficiency.
6 Variants
We explored a number of related variants on the subset of tuning games, with mostly negative results, and report our findings here, with the aim of adding some additional insight into what makes NVA work, and to prevent follow-up efforts from blindly repeating our mistakes.
β-capacity We experimented with adding additional capacity to the β-network in Equation 9, namely inserting a hidden ReLU layer with $n_h \in \{16, 32, 64\}$; this neither helped nor hurt performance, so we opted for the simplest architecture (no hidden layer). We hypothesize that learning a binary gate is much easier than learning the value estimate, so no additional capacity is required.
Weighted v-updates We also validated the design choice of weighting the update to $v$ by its usage $(1-\beta)$ (see Equation 5). On the 6 tuning games, weighting by usage obtains slightly higher performance than an unweighted loss on $v$. One hypothesis is that the weighting permits the direct estimates to be more accurate in some states than in others, freeing up function approximation capacity for where it is most needed.
Semantic versus aggregate losses Our proposed method separates the semantically different updates on $\beta$ and $v$, but of course a simpler alternative would be to directly regress the natural value estimate $G^\beta_t$ toward its target, and back-propagate the aggregate loss into both $\beta$ and $v$ jointly. This alternative performs substantially worse, empirically. We hypothesize one reason for this: in a state where $G^p_t$ structurally over-estimates the target value, an aggregate loss will encourage $v$ to compensate by under-estimating it. In contrast, the semantic losses encourage $v$ to simply be more accurate and then reduce $\beta$.
Training by back-propagation through time The recursive form of Equation 4 lends itself to an implementation as a specific form of recurrent neural network, where the recurrent connection transmits a single scalar $G^\beta_t$. In this form, the system can be trained by back-propagation through time (BPTT [17]). This is semantically subtly different from our proposed method, as the gates $\beta$ no longer make a local choice between $V_t$ and $G^p_t$, but instead the entire sequence of $\beta_{t-k}$ to $\beta_t$ is
[Figure 4: top, per-game summary bars of the relative change in value loss across all 57 games (ranging from roughly -50% to 0%); bottom, detail panels for seaquest, time_pilot, up_n_down and surround showing the average squared TD error of $v$ and of $G^\beta$, together with the average $\beta$, as training progresses.]
Figure 4: Reduction in value estimation error compared to the baseline. The proxies we use are the average squared TD-errors encountered during training, comparing $\epsilon_v = \frac{1}{2}(Z_t - v(S_t;\theta))^2$ and $\epsilon_\beta = \frac{1}{2}(Z_t - G^\beta_t)^2$. Top: Summary graph for all games, showing the relative change in error $(\epsilon_\beta - \epsilon_v)/\epsilon_v$, averaged over the full training run. As expected, the natural value estimate consistently has equal or lower error, validating our core hypothesis. Bottom: Detailed plots on a handful of games, showing the direct estimate error $\epsilon_v$ (blue) and the natural value estimate error $\epsilon_\beta$ (red). In addition, the blending proportion $\beta$ (cyan) adapts over time to use more of the prospective value estimate if that is more accurate.
trained to provide the best estimate $G^\beta_t$ at time $t$ (where $k$ is the truncation horizon of BPTT). We experimented with this variant as well: it led to a clear improvement over the baseline, but its performance was substantially below the simpler feed-forward setup with the reward buffer in Equation 9 (median normalized scores of 78% and 103% for the human and no-op starts, respectively).
7 Discussion
Relation to eligibility traces In TD(λ) [11], a well-known and successful variant of TD, the value function (1) is not learned by a one-step update, but instead relies on multiple value estimates from further in the future. Concretely, the target for the update of the estimate $V_t$ is then $G^\lambda_t$, which can be defined recursively by $G^\lambda_t = R_{t+1} + \gamma_{t+1}(1-\lambda)V_{t+1} + \gamma_{t+1}\lambda G^\lambda_{t+1}$, or as a mixture of several $n$-step targets [12]. The trace parameter $\lambda$ is similar to our $\beta$ parameter, but faces backwards in time rather than forwards.
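For reference, this recursive λ-return can be computed backwards over a truncated trajectory as in the following sketch; the bootstrap handling at the truncation point is an assumption.

```python
def lambda_returns(rewards, discounts, values, lam):
    """G^lambda_t = R_{t+1} + gamma_{t+1} ((1 - lam) V_{t+1} + lam G^lambda_{t+1}).

    rewards[t] = R_{t+1}, discounts[t] = gamma_{t+1}, values[t] = V_{t+1};
    the recursion is run backwards, bootstrapping on the final value.
    """
    g = [0.0] * len(rewards)
    nxt = values[-1]                    # G beyond the truncation point
    for t in reversed(range(len(rewards))):
        nxt = rewards[t] + discounts[t] * ((1.0 - lam) * values[t] + lam * nxt)
        g[t] = nxt
    return g
```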
A quantity very similar to $G^\beta_t$ was discussed by van Hasselt and Sutton [13], where this quantity was then used to update values prior to time $t$. The inspiration was similar, in the sense that it was acknowledged that $G^\beta_t$ may be a more accurate target to use than either the Monte Carlo return or any single estimated state value. The use of $G^\beta_t$ itself for online predictions, apart from using it as a target to update towards, was not yet investigated.
Extension to action-values There is no obstacle to extending our approach to estimators of action-values $q(S_t, A_t, \theta)$. One generalization from TD to SARSA is almost trivial. The quantity $G^\beta_t$ then has the semantics of the value of action $A_t$ in state $S_t$.
It is also possible to consider off-policy learning. Consider the Bellman optimality equation $Q^*(s, a) = \mathbb{E}\left[R_{t+1} + \gamma_{t+1}\max_{a'} Q^*(S_{t+1}, a')\right]$. This implies that for the optimal value function $Q^*$,
$$\mathbb{E}\left[\max_a Q^*(S_t, a)\right] = \mathbb{E}\left[\frac{Q^*(S_{t-1}, A_{t-1}) - R_t}{\gamma_t}\right].$$
This implies that we may be able to use the quantity $(Q(S_{t-1}, A_{t-1}) - R_t)/\gamma_t$ as an estimate for the greedy value $\max_a Q(S_t, a)$. For instance, we could blend the value as in SARSA, and define
$$G^\beta_t = (1-\beta_t)\,Q(S_t, A_t) + \beta_t\,\frac{G^\beta_{t-1} - R_t}{\gamma_t}.$$
Perhaps we could require $\beta_t = 0$ whenever $A_t \neq \arg\max_a Q(S_t, a)$, in a similar vein as Watkins' Q(λ) [16] that zeros the eligibility trace for non-greedy actions. We leave this and other potential variants for more detailed consideration in future work.
Memory NVA adds a small amount of memory to the system (a single scalar), which raises the
question of whether other forms of memory, such as the LSTM [4], provide a similar benefit. We
do not have a conclusive answer, but the existing empirical evidence indicates that the benefit of
natural value estimation goes beyond just memory. This can be seen by comparing to the A3C+LSTM
baseline (also proposed in [9]), which has vastly larger memory and number of parameters, yet did not
achieve equivalent performance (median normalized scores of 81% for the human starts). To some
extent this may be caused by the fact that recurrent neural networks are more difficult to optimize.
Regularity and structure Results from the supervised learning literature indicate that computing
a reasonable approximation of a given target function is feasible when the learning algorithm exploits
some kind of regularity in the latter [3]. For example, one may assume that the target function is
bounded, smooth, or lies in a low-dimensional manifold. These assumptions are usually materialised
in the choice of approximator. Making structural assumptions about the function to approximate
is both a blessing and a curse. While a structural assumption makes it possible to compute an
approximation with a reasonable amount of data, or using a smaller number of parameters, it can also
compromise the quality of the solution from the outset. We believe that while our method may not be
the ideal structural assumption for the problem of approximating value functions, it is at least better
than the smooth default.
Online learning By construction, the natural value estimates are an online quantity, that can only be
computed from a trajectory. This means that the extension to experience replay [6] is not immediately
obvious. It may be possible to replay trajectories, rather than individual transitions, or perhaps it
suffices to use stale value estimates at previous states, which might still be of better quality than the
current value estimate at the sampled state. We leave a full investigation of the combination of these
methods to future work.
Predictions as state In our proposed method the value is estimated in part as a function of a single
past prediction, and this has some similarity to past work in predictive state representations [7].
Predictive state representations are quite different in practice: their state consists of only predictions,
the predictions are of future observations and actions (not rewards), and their objective is to provide a
sufficient representation of the full environmental dynamics. The similarities are not too strong with
the work proposed here, as we use a single prediction of the actual value, this prediction is used as a
small but important part of the state, and the objective is to estimate only the value function.
8 Conclusion
This paper argues that there is one specific structural regularity that underlies the value function
of many reinforcement learning problems, which arises from the temporal nature of the problem.
We proposed natural value approximation, a method that learns how to combine a direct value
estimate with ones projected from past estimates. It is effective and simple to implement, which
we demonstrated by augmenting the value critic in A3C, and which significantly improved median
performance across 57 Atari games.
Acknowledgements
The authors would like to thank Volodymyr Mnih for his suggestions and comments on the early
version of the paper, the anonymous reviewers for constructive suggestions to improve the paper. The
authors also thank the DeepMind team for setting up the environments and building helpful tools
used in the paper.
References
[1] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[2] Richard Bellman. A Markovian decision process. Technical report, DTIC Document, 1957.
[3] László Györfi. A Distribution-Free Theory of Nonparametric Regression. Springer Science & Business Media, 2002.
[4] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[5] Diederik Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. In ICLR, 2014.
[6] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3-4):293–321, 1992.
[7] Michael L Littman, Richard S Sutton, and Satinder Singh. Predictive representations of state. In NIPS, pages 1555–1562, 2002.
[8] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[9] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, pages 1928–1937, 2016.
[10] Richard S Sutton. Temporal credit assignment in reinforcement learning. PhD thesis, University of Massachusetts Amherst, 1984.
[11] Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[12] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. 1998.
[13] Hado van Hasselt and Richard S. Sutton. Learning to predict independent of span. CoRR, abs/1508.04582, 2015.
[14] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In AAAI, pages 2094–2100, 2016.
[15] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, pages 1995–2003, 2016.
[16] Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
[17] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
Julien Audiffren
CMLA
ENS Paris-Saclay, CNRS
Université Paris-Saclay, France
[email protected]
Liva Ralaivola
Lab. Informatique Fondamentale de Marseille
CNRS, Aix Marseille University
Institut Universitaire de France
F-13288 Marseille Cedex 9, France
[email protected]
Abstract
We address the problem of dueling bandits defined on partially ordered sets, or
posets. In this setting, arms may not be comparable, and there may be several
(incomparable) optimal arms. We propose an algorithm, UnchainedBandits,
that efficiently finds the set of optimal arms (the Pareto front) of any poset
even when pairs of comparable arms cannot be a priori distinguished from pairs
of incomparable arms, with a set of minimal assumptions. This means that UnchainedBandits does not require information about comparability and can be
used with limited knowledge of the poset. To achieve this, the algorithm relies
on the concept of decoys, which stems from social psychology. We also provide
theoretical guarantees on both the regret incurred and the number of comparison
required by UnchainedBandits, and we report compelling empirical results.
1 Introduction
Many real-life optimization problems pose the issue of dealing with a few, possibly conflicting,
objectives: think for instance of the choice of a phone plan, where a right balance between the price,
the network coverage/type, and roaming options has to be found. Such multi-objective optimization
problems may be studied from the multi-armed bandits perspective (see e.g. Drugan and Nowe
[2013]), which is what we do here from a dueling bandits standpoint.
Dueling Bandits on Posets. Dueling bandits [Yue et al., 2012] pertain to the K-armed bandit
framework, with the assumption that there is no direct access to the reward provided by any single
arm and the only information that can be gained is through the simultaneous pull of two arms: when
such a pull is performed the agent is informed about the winner of the duel between the two arms.
We extend the framework of dueling bandits to the situation where there are pairs of arms that are not
comparable, that is, we study the case where there might be no natural order that could help decide
the winner of a duel; this situation may show up, for instance, if the (hidden) values associated with
the arms are multidimensional, as is the case in the multi-objective setting mentioned above. The
notion of incomparability naturally links this problem with the theory of posets, and our approach takes inspiration from works dedicated to selecting and sorting on posets [Daskalakis et al., 2011].
Chasing the Pareto Front. In this setting, the best arm may no longer be unique, and we consider
the problem of identifying among all available K arms the set of maximal incomparable arms, or
the Pareto front, with minimal regret. This objective significantly differs from the usual objective
of dueling bandit algorithms, which aim to find one optimal arm (such as a Condorcet winner, a Copeland winner or a Borda winner) and pull it as frequently as possible to minimize the regret.
Finding the entire Pareto front (denoted P) is more difficult, but pertains to many real-world applications. For instance, in the discussed phone plan setting, P will contain both the cheapest plan and the
plan offering the largest coverage, as well as any non dominated plan in-between; therefore, every
customer may then find a suitable plan in P in accordance with her personal preferences.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Key: Indistinguishability. In practice, the incomparability information might be difficult to obtain. Therefore, we assume the underlying incomparability structure is unknown and inaccessible. A pivotal issue that arises is that of indistinguishability. In the assumed setting, the pull of two arms that are comparable and that have close values (and hence a probability for either arm to win a duel close to 0.5) is essentially driven by the same random process, i.e. an unbiased coin flip, as the draw of two arms that are not comparable. This induces the problem of indistinguishability: that of deciding from pulls whether a pair of arms is incomparable or is made of arms of similar strengths.
Contributions. Our main contribution, the UnchainedBandits algorithm, implements a strategy
based on a peeling approach (Section 3). We show that UnchainedBandits can find a nearly
optimal approximation of the set of optimal arms of S with probability at least 1 − δ, while incurring a regret upper bounded by R ≤ O( K width(S) log(K/δ) Σ_{i∉P} 1/Δ_i ), where Δ_i is the regret associated with arm i, K the size of the poset and width(S) its width, and that this regret
is essentially optimal. Moreover, we show that with little additional information, UnchainedBandits can recover the exact set of optimal arms, and that even when no additional information
is available, UnchainedBandits can recover P by using decoy arms, an idea stemming from social psychology, where decoys are used to lure an agent (e.g., a customer) towards a specific good/action (e.g. a product) by presenting her a choice between the targeted good and a degraded version of it (Section 4). Finally, we report results on the empirical performance of our algorithm in
different settings (Section 5).
Related Works. Since the seminal paper of Yue et al. [2012] on Dueling Bandits, numerous works
have proposed settings where the total order assumption is relaxed, but the existence of a Condorcet
winner is assumed [Yue and Joachims, 2011, Ailon et al., 2014, Zoghi et al., 2014, 2015b]. More
recent works [Zoghi et al., 2015a, Komiyama et al., 2016], which envision bandit problems from the
social choice perspective, pursue the objective of identifying a Copeland winner. Finally, the works
closest to our partial order setting are [Ramamohan et al., 2016] and [Dudík et al., 2015]. The former proposes a general algorithm which can recover many sets of winners, including the uncovered set,
which is akin to the Pareto front; however, it is assumed the problems do not contain ties, while in our
framework, any pair of incomparable arms is encoded as a tie. The latter proposes an extension of
dueling bandits using contexts, and introduces several algorithms to recover a Von Neumann winner,
i.e. a mixture of arms that is better than any other; in our setting, any mixture of arms from
the Pareto front is a Von Neumann winner. It is worth noting that the aforementioned works aim to
identify a single winner, either Condorcet, Copeland or Von Neumann. This is significantly different
from the task of identifying the entire Pareto front. Moreover, the incomparability property is not
addressed in previous works; while some algorithms may still be applied if incomparability is encoded as
a tie, they are not designed to fully use this information, which is reflected by their performances
in our experiments. Moreover, our lower bound illustrates the fact that our algorithm is essentially
optimal for the task of identifying the Pareto front. Regarding decoys, the idea originates from social psychology, which introduced the idea that the presence of strictly dominated alternatives may
influence the perceived value of items. This has generated an abundant literature that studied decoys
and their uses in various fields (see e.g. Tversky and Kahneman [1981], Huber et al. [1982], Ariely
and Wallsten [1995], Sedikides et al. [1999]). From the computer science literature, we may mention
the work of Daskalakis et al. [2011], which addresses the problem of selection and sorting on posets
and provides relevant data structures and accompanying analyses.
2 Problem: Dueling Bandits on Posets
We here briefly recall base notions and properties at the heart of our contribution.
Definition 2.1 (Poset). Let S be a set of elements. (S, ≽) is a partially ordered set or poset if ≽ is a partial reflexive, antisymmetric and transitive binary relation on S.
Transitivity relaxation. Recent works on dueling bandits (see e.g. Zoghi et al. [2014]) have shown that the transitivity property is not required for the agent to successfully identify the maximal element (in that case, the Condorcet winner), if it is assumed to exist. Similarly, most of the results we provide do not require transitivity. In the following, we dub social poset a transitivity-free poset, i.e. a partial binary relation which is solely reflexive and antisymmetric.
Remark 2.2. Throughout, we will use S to denote indifferently the set S or the social poset (S, ≽), the distinction being clear from the context. We make use of the additional notation: ∀a, b ∈ S,
• a ∥ b if a and b are incomparable (neither a ≽ b nor b ≽ a);
• a ≻ b if a ≽ b and a ≠ b.
Definition 2.3 (Maximal element and Pareto front). An element a ∈ S is a maximal element of S if ∀b ∈ S, a ≽ b or a ∥ b. We denote by P(S) ≐ {a : a ≽ b or a ∥ b, ∀b ∈ S} the set of maximal elements or Pareto front of the social poset.
Similarly to the problem of the existence of a Condorcet winner, P might be empty for a social poset (while in posets there always is at least one maximal element). In the following, we assume that |P| > 0. The notions of chain and antichain are key to identifying P.
Definition 2.4 (Chain, Antichain, Width and Height). C ⊆ S is a chain (resp. an antichain) if ∀a, b ∈ C, a ≽ b or b ≽ a (resp. a ∥ b). C is maximal if ∀a ∈ S \ C, C ∪ {a} is not a chain (resp. an antichain). The height (resp. width) of S is the size of its longest chain (resp. antichain).
K-armed Dueling Bandit on posets. The K-armed dueling bandit problem on a social poset S = {1, . . . , K} of arms might be formalized as follows. For all maximal chains {i_1, . . . , i_m} of m arms, there exists a family {Δ_{i_p i_q}}_{1≤p,q≤m} of parameters such that Δ_{ij} ∈ (−1/2, 1/2), and the pull of a pair (i_p, i_q) of arms from the same chain is the independent realization of a Bernoulli random variable B_{i_p i_q} with expectation E(B_{i_p i_q}) = 1/2 + Δ_{i_p i_q}, where B_{i_p i_q} = 1 means that i_p is the winner of the duel between i_p and i_q, and conversely (note that ∀i, j, Δ_{ji} = −Δ_{ij}). In the situation where the pair of arms (i_p, i_q) selected by the agent corresponds to arms such that i_p ∥ i_q, a pull is akin to the toss of an unbiased coin, that is, Δ_{i_p i_q} = 0. This is summarized by the following assumption:
Assumption 1 (Order Compatibility). ∀i, j ∈ S, (i ≻ j) if and only if Δ_{ij} > 0.
Regret on posets. In the total order setting, the regret incurred by pulling an arm i is defined as the difference between the best arm and arm i. In the poset framework, there might be multiple "best" arms, and we chose to define regret as the maximum of the difference between arm i and the best arm comparable to i. Formally, the regret Δ_i is defined as:

Δ_i = max{ Δ_{ji}, ∀j ∈ P such that j ≽ i }.

We then define the regret incurred by comparing two arms i and j by Δ_i + Δ_j. Note that the regret of a comparison is zero if and only if the agent is comparing two elements of the Pareto front.
Problem statement. The problem that we want to tackle is to identify the Pareto front P(S) of S as efficiently as possible. More precisely, we want to devise pulling strategies such that for any given δ ∈ (0, 1), we are ensured that the agent is capable, with probability 1 − δ, to identify P(S) with a controlled number of pulls and a bounded regret.
"-indistinguishability. In our model, we assumed that if i k j, then ij = 0: if two arms cannot be
compared, the outcome of the their comparison will only depend on circumstances independent from
the arms (like luck or personal tastes). Our encoding of such framework makes us assume that when
considered over many pulls, the effects of those circumstances cancel out, so that no specific arm
is favored, whence ij = 0. The limit of this hypothesis and the robustness of our results when not
satisfied are discussed in Section 5.
This property entails the problem of indistinguishability evoked previously. Indeed, given two arms i
and j, regardless of the number of comparisons, an agent may never be sure if either the two arms are
very close to each other ( ij ? 0 and i and j are comparable) or if they are not comparable ( ij = 0).
This raises two major difficulties. First, any empirical estimation Δ̂_{ij} of Δ_{ij} being close to zero is no longer a sufficient condition to assert that i and j have similar values; insisting on pulling the pair (i, j) to decide whether they have similar values may incur a very large regret if they are incomparable. Second, it is impossible to ensure that two elements are incomparable; therefore, identifying the exact Pareto set is intractable if no additional information is provided. Indeed, the agent might never be sure if the candidate set no longer contains unnecessary additional elements, i.e. arms very close to the real maximal elements but nonetheless dominated. This problem motivates the following definition, which quantifies the notion of indistinguishability:
Definition 2.5 ("-indistinguishability). Let a, b 2 S and " > 0. a and b are "-indistinguishable,
noted a k" b, if | ab | ? ".
As the notation k" implies, the "-indistinguishability of two arms can be seen as a weaker form of
incomparability, and note that as "-decreases, previously indistinguishable pairs of arms become dis3
Algorithm 1 Direct comparison
Given (S, ≽) a social poset, δ, ε > 0, a, b ∈ S
Define p̂_{ab} the average number of victories of a over b and I_{ab} its 1 − δ confidence interval.
Compare a and b until |I_{ab}| < ε or 0.5 ∉ I_{ab}.
return a ∥_ε b if |I_{ab}| < ε, else a ≻ b if p̂_{ab} > 0.5, else b ≻ a.
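The following is a minimal Python sketch of this direct comparison; the duel() callback, the Hoeffding-style confidence radius and the max_pulls safeguard are illustrative assumptions rather than details taken from the paper.

import math

def direct_comparison(duel, a, b, delta, eps, max_pulls=10**6):
    """Compare arms a and b (Algorithm 1 sketch).

    duel(a, b) must return 1 if a wins the duel and 0 otherwise.
    Stops when the 1 - delta confidence radius around the empirical
    win rate drops below eps (eps-indistinguishable) or excludes 0.5.
    """
    wins, n = 0, 0
    while n < max_pulls:
        wins += duel(a, b)
        n += 1
        # Hoeffding confidence radius for a Bernoulli mean, level 1 - delta.
        radius = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        p_ab = wins / n
        if radius < eps:                 # interval smaller than eps: a ||_eps b
            return 'indistinguishable'
        if abs(p_ab - 0.5) > radius:     # 0.5 lies outside the interval
            return a if p_ab > 0.5 else b
    return 'indistinguishable'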
Algorithm 2 UnchainedBandits
Given S = {s_1, . . . , s_K} a social poset, δ > 0, N > 0, (ε_t)_{t=1}^N ∈ R_+^N
Initialisation Define S_1 = S. Maintain p̂ = (p̂_{ij})_{i,j=1}^K, the average number of victories of i against j, and I = (I_{ij})_{i,j=1}^K = min( √( log(N K²/δ) / (2 n_{ij}) ), 1 ), the corresponding 1 − δ/(N K²) confidence intervals.
Peel P̂:
for t = 1 to N do S_{t+1} = UBSRoutine(S_t, ε_t, δ/N, A = Algorithm 1)
return P̂ = S_{N+1}
As the notation ∥_ε implies, the ε-indistinguishability of two arms can be seen as a weaker form of incomparability, and note that as ε decreases, previously indistinguishable pairs of arms become distinguishable; the only 0-indistinguishable pairs of arms are the incomparable pairs. The classical notions of a poset related to incomparability can easily be extended to fit ε-indistinguishability:
Definition 2.6 ("-antichain, "-width and "-approximation of P). Let " > 0. C ? S is an "-antichain
if 8a 6= b 2 C, we have a k" b. Additionally, P 0 ? S is an "-approximation of P (noted P 0 2 P" ) if
P ? P 0 and P 0 is an "-antichain. Finally, width" (S) is the size of the largest "-antichain of S.
Features of P_ε. While the Pareto front is always unique, it might possess multiple ε-approximations. The interest of working with P_ε is threefold: i) to find an ε-approximation of P, the agent only has to remove the elements of S which are not ε-indistinguishable from P; thus, if P cannot be recovered in the partially observable setting, an ε-approximation of P can be obtained; ii) any set in P_ε contains P, so no maximal element is discarded; iii) for any B ∈ P_ε, all the elements of B are nearly optimal, in the sense that ∀i ∈ B, Δ_i < ε. It is worth noting that ε-approximations of P may structurally differ from P in some settings, though. For instance, if S includes an isolated cycle, an ε-approximation of the Pareto front may contain elements of the cycle, and in such a case, approximating the Pareto front using ε-approximations may lead to counterintuitive results.
Finding an ε-approximation of P is the focus of the next subsection.
3 Chasing P_ε with UnchainedBandits
3.1 Peeling and the UnchainedBandits Algorithm
While deciding if two arms are incomparable or very close is intractable, the agent is able to find out whether two arms a and b are ε-indistinguishable, by using for instance the direct comparison process provided by Algorithm 1. Our algorithm, UnchainedBandits, follows this idea to efficiently retrieve an ε-approximation of the Pareto front. It is based on a peeling technique: given N > 0 and a decreasing sequence (ε_t)_{1≤t≤N}, it computes and refines an ε_t-approximation P̂_t of the Pareto front, using UBSRoutine (Algorithm 3), which considers ε_t-indistinguishable arms as incomparable.
Peeling S. Peeling provides a way to control the time spent on pulling indistinguishable arms, and it is used to upper bound the regret. Without peeling, i.e. if the algorithm were directly called with ε_N, the agent could use a number of pulls proportional to 1/ε_N² trying to distinguish two incomparable arms, even though one of them is a regret-inducing arm (e.g. an arm j with a large |Δ_{i,j}| for some i ∈ P). The peeling strategy ensures that inefficient arms are eliminated in early epochs, before the agent can focus on the remaining arms with an affordable larger number of comparisons.
Algorithm subroutine. At each epoch, UBSRoutine (Algorithm 3), called on S_t with parameters ε > 0 and δ > 0, works as follows. It chooses a single initial pivot (an arm to which other arms are compared) and successively examines all the elements of S_t. The examined element p is compared to all the pivots (the current pivot and the previously collected ones), using Algorithm 1 with parameters ε and δ/K². Each pivot that is dominated by p is removed from the pivot set. If, after being compared to all the pivots, p has not been dominated, it is added to the pivot set. At the end, the set of remaining pivots is returned.
Algorithm 3 UBSRoutine
Given S_t a social poset, ε_t > 0 a precision criterion, δ′ > 0 an error parameter
Initialisation Choose p ∈ S_t at random. Define P̂ = {p} the set of pivots.
Construct P̂:
for c ∈ S_t \ {p} do
  for c′ ∈ P̂, compare c and c′ using Algorithm 1 with (δ = δ′/|S_t|², ε = ε_t).
  ∀c′ ∈ P̂ such that c ≻ c′, remove c′ from P̂
  if ∀c′ ∈ P̂, c ∥_{ε_t} c′ then add c to P̂
return P̂
Reuse of information. To optimize the efficiency of the peeling process, UnchainedBandits reuses previous comparison results: the empirical estimates p̂_{ab} and the corresponding confidence intervals I_{ab} are initialized using the statistics collected from previous pulls of a and b.
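To make the peeling concrete, here is a hedged Python sketch of UBSRoutine (Algorithm 3) and of the peeling loop of Algorithm 2, built on the direct_comparison sketch above; the reuse of statistics across epochs is omitted for brevity, and the names and default values are our own assumptions.

import random

def ubs_routine(arms, duel, eps, delta):
    """One peeling epoch: returns the surviving set of pivots."""
    arms = list(arms)
    pivot = random.choice(arms)
    pivots = [pivot]
    local_delta = delta / (len(arms) ** 2)   # error budget per pairwise test
    for c in arms:
        if c == pivot:
            continue
        dominated = False
        for p in list(pivots):
            outcome = direct_comparison(duel, c, p, local_delta, eps)
            if outcome == c:        # c dominates pivot p: remove p
                pivots.remove(p)
            elif outcome == p:      # c is dominated: discard it
                dominated = True
                break
        if not dominated:
            pivots.append(c)
    return pivots

def unchained_bandits(arms, duel, delta, gamma=0.9, n_epochs=20):
    """Peel with the geometric schedule eps_t = gamma**t (Algorithm 2 sketch)."""
    survivors = list(arms)
    for t in range(1, n_epochs + 1):
        survivors = ubs_routine(survivors, duel, gamma ** t, delta / n_epochs)
    return survivors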
3.2 Regret Analysis
In this part, we focus on geometrically decreasing peeling sequences, i.e. ∃γ > 0 such that ε_t = γ^t, ∀t ≥ 0. We now introduce the following theorem¹, which gives an upper bound on the regret incurred by UnchainedBandits.
Theorem 1. Let R be the regret generated by Algorithm 2 applied on S with parameters δ, N and with a decreasing sequence (ε_t)_{t=1}^N such that ε_t = γ^t, ∀t ≥ 0. Then with probability at least 1 − δ, UnchainedBandits successfully returns P̂ ∈ P_{ε_N} after at most T comparisons, with

T ≤ O( K width_{ε_N}(S) log(N K²/δ) / ε_N² )   (1)

R ≤ (2K/γ²) log(2N K²/δ) Σ_{i=1}^{K} 1/Δ_i   (2)
The 1/γ² term reflects the fact that a careful peeling, i.e. γ close to 1, is required to avoid unnecessarily expensive (regret-wise) comparisons: this prevents the algorithm from comparing two incomparable, yet severely suboptimal, arms for an extended period of time. Conversely, for a given approximation accuracy ε_N = ε, N increases as 1/log γ, since γ^N = ε, which illustrates the fact that unnecessary peeling, i.e. peeling that does not remove any arms, leads to a slightly increased regret. In general, γ should be chosen close to 1 (e.g. 0.95), as the advantages tend to surpass the drawbacks, unless additional information about the poset structure is known.
Influence of the complexity of S. In the bounds of Theorem 1, the complexity of S influences the result through its total size |S| = K and its width. One of the features of UnchainedBandits is that the dependency on S in Theorem 1 is |S|·width(S) and not |S|². For instance, if S is actually equipped with a total order, then width(S) = 1 and we recover the best possible dependency in |S|, which is highlighted by the lower bound (see Theorem 2).
Comparison Lower Bound. We will now prove that the previous result is nearly optimal in order. Let A denote a dueling bandit algorithm on hidden posets. We first introduce the following assumption:
Assumption 2. ∀K > W ∈ N₊, ∀δ > 0, 1/8 > ε > 0, for any poset S such that |S| ≤ K and max(|P_ε(S)|) ≤ W, A identifies an ε-approximation of the Pareto front P_ε of S with probability at least 1 − δ with at most T_{A,δ}(K, W) comparisons.
Theorem 2. Let A be a dueling bandit algorithm satisfying Assumption 2. Then for any δ > 0, 1/8 > ε > 0, K and W two positive integers such that K > W > 0, there exists a poset S such that |S| = K, width(S) = |P(S)| = W, max(|P_ε(S)|) ≤ W and

E[ T_{A,δ}(K, W) | A(S) = P(S) ] ≥ Ω̃( K W log(1/δ) / ε² ).
The main discrepancy between the usual dueling bandit upper and lower bounds for regret is the K factor (see e.g. [Komiyama et al., 2015]), and ours is arguably the K factor.
¹The complete proof for all our results can be found in the supplementary material.
Algorithm 4 Decoy comparison
Given (S, ≽) a poset, δ, λ > 0, a, b ∈ S
Initialisation Create a′, b′ the respective λ-decoys of a and b. Maintain p̂_{ab} the average number of victories of a over b and I_{ab} its 1 − δ/2 confidence interval.
Compare a and b′, b and a′, until max(|I_{ab′}|, |I_{ba′}|) < λ or p̂_{ab′} > 0.5 or p̂_{ba′} > 0.5.
return a ∥ b if max(|I_{ab′}|, |I_{ba′}|) < λ, else a ≻ b if p̂_{ab′} > 0.5, else b ≻ a.
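Below is a minimal Python sketch of this decoy comparison, assuming λ-decoys a_decoy and b_decoy are available as arms; for simplicity the two duels are run one after the other rather than interleaved as in Algorithm 4, and it reuses the direct_comparison sketch from Section 3.

def decoy_comparison(duel, a, b, a_decoy, b_decoy, delta, lam):
    """Decide comparability of a and b via their decoys (Proposition 4.2)."""
    # a against b's decoy: a significant win reveals a > b.
    r1 = direct_comparison(duel, a, b_decoy, delta / 2.0, lam)
    if r1 == a:
        return a
    # b against a's decoy: a significant win reveals b > a.
    r2 = direct_comparison(duel, b, a_decoy, delta / 2.0, lam)
    if r2 == b:
        return b
    return 'incomparable'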
It is worth noting that this additional complexity is directly related to the goal of finding the entire Pareto front, as can be seen in the proof of Theorem 2 (see the supplementary material).
4 Finding P using Decoys
In this section, we discuss several methods to recover the exact Pareto front from an ε-approximation, when S is a poset. First, note that P can be found if additional information on the poset is available. For instance, if a lower bound c > 0 on the minimum distance of any arm to the Pareto set, defined as d(P) = min{ Δ_{ij}, ∀i ∈ P, j ∈ S \ P, such that i ≻ j }, is known, then since P_c = {P}, UnchainedBandits used with ε_N = c will produce the Pareto front of S. Alternatively, if the size k of the Pareto front is known, P can be found by peeling S_t until it achieves the desired size. This can be achieved by successively calling UBSRoutine with parameters S_t, ε_t = γ^t, and δ_t = 6δ/π²t², and by stopping as soon as |S_t| = k.
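Assuming the size k of the Pareto front is indeed known, this variant can be sketched as follows, reusing the ubs_routine sketch from Section 3 (the max_epochs safeguard is our addition):

import math

def peel_until_size_k(arms, duel, delta, k, gamma=0.9, max_epochs=200):
    survivors, t = list(arms), 0
    while len(survivors) > k and t < max_epochs:
        t += 1
        delta_t = 6.0 * delta / (math.pi ** 2 * t ** 2)   # error budget of epoch t
        survivors = ubs_routine(survivors, duel, gamma ** t, delta_t)
    return survivors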
This additional information may be unavailable in practice, so we propose an approach which does not rely on external information to solve the problem at hand. We devise a strategy which rests on the idea of decoys, that we now fully develop. First, we formally define decoys for posets, and we prove that it is a sufficient tool to solve the incomparability problem (Algorithm 4). We also present methods for building those decoys, both for the purely formal model of posets and for real-life problems. In the following, λ is a strictly positive real number.
Definition 4.1 (λ-decoy). Let a ∈ S. Then b ∈ S is said to be a λ-decoy of a if:
1. a ≻ b and Δ_{a,b} ≥ λ;
2. ∀c ∈ S, a ∥ c implies b ∥ c;
3. ∀c ∈ S such that c ≽ a, Δ_{c,b} ≥ λ.
The following proposition illustrates how decoys can be used to assess incomparability.
Proposition 4.2 (Decoys and incomparability). Let a and b ∈ S. Let a′ (resp. b′) be a λ-decoy of a (resp. b). Then a and b are comparable if and only if max(Δ_{b,a′}, Δ_{a,b′}) ≥ λ.
Algorithm 4 is derived from this result. The next proposition, which is an immediate consequence of Proposition 4.2, gives a theoretical guarantee on its performance.
Proposition 4.3. Algorithm 4 returns the correct incomparability result with probability at least 1 − δ after at most T comparisons, where T = 4 log(4/δ)/λ².
Adding decoys to a poset. A poset S may not contain all the necessary decoys. To alleviate this, the following proposition states that it is always possible to add relevant decoys to a poset.
Proposition 4.4 (Extending a poset with a decoy). Let (S, ≽, Δ) be a dueling bandit problem on a poset S and a ∈ S. Define a′, S′, ≽′, Δ′ as follows:
• S′ = S ∪ {a′}
• ∀b, c ∈ S, b ≽ c if and only if b ≽′ c, and Δ′_{b,c} = Δ_{b,c}
• ∀b ∈ S, if b ≽ a then b ≽′ a′ and Δ′_{b,a′} = max(Δ_{b,a}, λ). Otherwise, b ∥ a′.
Then (S′, ≽′, Δ′) defines a dueling bandit problem on a poset, Δ′|_S = Δ, and a′ is a λ-decoy of a.
Note that the addition of decoys in a poset does not disqualify previous decoys, so that this proposition can be used iteratively to produce the required number of decoys.
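As an illustration, the following sketch applies Proposition 4.4 to a problem encoded, by assumption, as a dict of gaps gaps[(i, j)] = Δ_{ij}, with a missing entry encoding i ∥ j.

def add_decoy(gaps, arms, a, lam):
    """Extend (arms, gaps) with a lam-decoy a_prime of arm a (Prop. 4.4 sketch)."""
    a_prime = (a, 'decoy')
    new_gaps = dict(gaps)
    for b in arms:
        if b == a:
            new_gaps[(a, a_prime)] = lam       # a beats its own decoy by lam
            new_gaps[(a_prime, a)] = -lam
            continue
        gap_ba = gaps.get((b, a))              # None encodes b || a
        if gap_ba is not None and gap_ba > 0:  # b strictly dominates a
            g = max(gap_ba, lam)               # Delta'_{b,a'} = max(Delta_{b,a}, lam)
            new_gaps[(b, a_prime)] = g
            new_gaps[(a_prime, b)] = -g
        # otherwise b || a or a > b, hence b || a_prime: no entry is added
    return arms + [a_prime], new_gaps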
Decoys in real-life. The intended goal of a decoy a′ of a is to have at hand an arm that is known to be lesser than a. Creating such a decoy in real life can be done by using a degraded version of a: for the case of an item in an online shop, a decoy can be obtained by e.g. increasing the price. Note that while for large values of the parameter λ of the decoys Algorithm 4 requires fewer comparisons (see
Table 1: Comparison between the five films with the highest average scores (bottom line) and the five films of the computed ε-Pareto set (top line).

Pareto Front | Pulp Fiction | Fight Club     | Shawshank Redemption | The Godfather | Star Wars Ep. V
Top Five     | Pulp Fiction | Usual Suspects | Shawshank Redemption | The Godfather | The Godfather II
Proposition 4.3), in real-life problems the second point of Definition 4.1 tends to become false: the new option is actually so much worse than the original that the decoy becomes comparable (and inferior) to all the other arms, including previously non-comparable arms (for example, if the price becomes absurd). In that case, the use of decoys of arbitrarily large λ can lead to erroneous conclusions about the Pareto front and should be avoided. Given a specific decoy, the problem of estimating λ in a real-life problem may seem difficult. However, as decoys are not new (even though the use we make of them here is), a number of methods [Heath and Chatterjee, 1995] have been designed to estimate the quality of a decoy, which is directly related to λ, and, with limited work, this parameter may be estimated as well. We refer the interested reader to the aforementioned paper (and references therein) for more details on the available estimation methods.
Using decoys. As a consequence of Proposition 4.3, Algorithm 3 used with decoys instead of direct comparisons and ε = λ will produce the exact Pareto front. But this process can be very costly, as the number of required comparisons is proportional to 1/λ², even for strongly suboptimal arms. Therefore, our algorithm, UnchainedBandits, when combined with decoys, first produces an ε-approximation P̂ of P using a peeling approach and direct comparisons, before refining it into P by using Algorithm 3 together with decoys. The following theorems provide guarantees on the performance of this modification of UnchainedBandits.
Theorem 3. UnchainedBandits applied on S with λ-decoys, parameters δ, N and with a decreasing sequence (ε_t)_{t=1}^{N−1} lower bounded by λ√(K/width(S)), returns the Pareto front P of S with probability at least 1 − δ after at most T comparisons, with

T ≤ O( K width(S) log(N K²/δ) / λ² )   (3)
Theorem 4. UnchainedBandits applied on S with λ-decoys, parameters δ, N and with a decreasing sequence (ε_t)_{t=1}^{N−1} such that ε_{N−1} ≤ λ√K, returns the Pareto front P of S with probability at least 1 − δ while incurring a regret R such that

R ≤ (2K/γ²) log(2N K²/δ) Σ_{i=1}^{K} 1/Δ_i + K width(S) log(2N K²/δ) Σ_{i: Δ_i < ε_{N−1}, i∉P} 1/Δ_i,   (4)
Compared to (2), (4) includes an extra term due to the regret incurred by the use of decoys. In this term, the dependency on S is slightly worse (K width(S) instead of K). However, this extra regret is limited to arms belonging to an ε-approximation of the Pareto front, i.e. nearly optimal arms.
Constraints on ε. Theorem 4 requires that ε_{N−1} ≤ λ√K, which implies that only near-optimal arms remain during the decoy step. This is crucial to obtain a reasonable upper bound on the incurred regret, as the number of comparisons using decoys is large (∝ 1/λ²) and is the same for every arm, regardless of its regret. Conversely, in Theorem 3, which provides an upper bound on the number of comparisons required to find the Pareto front, the ε_t are required to be lower bounded. This bound is tight in the (worst-case) scenario where all the arms are λ-indistinguishable, i.e. peeling cannot eliminate any arm. In that case, any comparison done during the peeling is actually wasted, and the lower bound on ε_t allows to control the number of comparisons made during the peeling step. In order to satisfy both constraints, ε_{N−1} must be chosen such that λ√(K/width(S)) ≤ ε_{N−1} ≤ λ√K. In particular, ε_{N−1} = λ√K satisfies both conditions and does not rely on the knowledge of width(S).
5 Numerical Simulations
5.1 Simulated Poset
Here, we test UnchainedBandits on randomly generated posets of different sizes, widths and
heights. To evaluate the performance of UnchainedBandits, we compare it to three variants of
dueling bandit algorithms which were naively modified to handle partial orders and incomparability:
Figure 1: Regret incurred by Modified IF2, Modified RUCB, UniformSampling and UnchainedBandits,
when the structure of the poset varies. Dependence on (left:) height, (center:) size of the Pareto front and (right:)
addition of suboptimal arms.
1. A simple algorithm, UniformSampling, inspired from the successive elimination algorithm [Even-Dar et al., 2006], which simultaneously compares all possible pairs of arms until one of the arms appears suboptimal, at which point it is removed from the set of selected arms. When only λ-indistinguishable elements remain, it uses λ-decoys.
2. A modified version of the single-pivot IF2 algorithm [Yue et al., 2012]. Similarly to the
regular IF2 algorithm, the agent maintains a pivot which is compared to every other element;
suboptimal elements are removed and better elements replace the pivot. This algorithm is
useful to illustrate consequences of the multi-pivot approach.
3. A modified version of RUCB [Zoghi et al., 2014]. This algorithm is useful to provide a non-pivot-based perspective.
More precisely, IF2 and RUCB were modified as follows: the algorithms were provided with the
additional knowledge of d(P), the minimum gap between one arm of the Pareto front and any other
given comparable arm. When, during the execution of the algorithm, the empirical gap between two arms reached this threshold, the arms were concluded to be incomparable. This allowed the agent to
retrieve the Pareto front iteratively, one element at a time.
The random posets are generated as follows: a Pareto front of size p is created, and w disjoint chains of length h − 1 are added. Then, the tops of the chains are connected to a random number of elements of the Pareto front. This creates the structure of the partial order ≽. Finally, the exact values of the Δ_{ij}'s are obtained from a uniform distribution, conditioned to satisfy Assumption 1 and to have d(P) ≥ 0.01. When needed, λ-decoys are created according to Proposition 4.4. For each experiment, we changed the value of one parameter, and left the others at their default values (p = 5, w = 2p, h = 10). Additionally, we provide one experiment where we studied the influence of the quality of the arms (Δ_i) on the incurred regret, by adding clearly suboptimal arms² to an existing poset. The results are averaged over ten runs and are reported in Figure 1. By default, we use δ = 1/1000 and λ = 1/100, γ = 0.9 and N = ⌊log(λ√K)/log γ⌋.
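For concreteness, here is a hedged Python sketch of this generator; the uniform range below and the tuple encoding of arms are assumptions, since the paper only requires the Δ values to satisfy Assumption 1 and d(P) ≥ 0.01.

import random

def random_poset(p=5, w=None, h=10, gap_min=0.01, gap_max=0.4):
    """Pareto front of size p plus w chains of h - 1 dominated arms.

    Returns the arm list and a gap dict gaps[(i, j)]; missing entries
    encode incomparable pairs."""
    w = 2 * p if w is None else w
    front = [('front', i) for i in range(p)]
    arms, gaps = list(front), {}
    for c in range(w):
        chain = [('chain', c, d) for d in range(h - 1)]
        arms += chain
        # The top of each chain is connected to a random subset of the front.
        parents = random.sample(front, random.randint(1, p))
        for d, arm in enumerate(chain):
            for anc in parents + chain[:d]:              # transitive closure
                delta = random.uniform(gap_min, gap_max)  # anc beats arm
                gaps[(anc, arm)] = delta
                gaps[(arm, anc)] = -delta
    return arms, gaps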
Result Analysis. While UniformSampling implements a naive approach, it does outperform the modified IF2. This can be explained as follows: in modified IF2, the pivot is constantly compared to all the remaining arms, including all the incomparable, and potentially strongly suboptimal, arms. These incomparable arms can only be eliminated after the pivot has changed, which can take a large number of comparisons, and this produces a large regret. UnchainedBandits and modified RUCB produce much better results than UniformSampling and modified IF2, and their advantage increases with the complexity of S. While UnchainedBandits performs better than modified RUCB in all the experiments, it is worth noting that this difference is particularly important when additional suboptimal arms are added. In RUCB, the general idea is roughly to compare the best optimistic arm available to its closest opponent. While this approach works greatly in a totally ordered set, in a poset it produces a lot of comparisons between an optimal arm i and an incomparable arm j, because in this case Δ_{ij} = 0, so the probability that i defeats j is 0.5 and j appears to be a close opponent to i, even though j can be clearly suboptimal.
²For this experiment, we say that an arm j is clearly suboptimal if ∃c ∈ P s.t. Δ_{cj} > 0.15.
5.2 MovieLens Dataset
To illustrate the application of UnchainedBandits on a concrete example, we used the 20 million items MovieLens dataset (Harper and Konstan [2015]), which contains movie evaluations. Movies can be seen as a poset, as two movies may be incomparable because they are from different genres (e.g. a horror movie and a documentary). To simulate a dueling bandit on a poset, we proceed as follows: we remove all films with fewer than 50000 evaluations, thus obtaining 159 films, represented as arms. Then, when comparing two arms, we pick at random a user who has evaluated both films, and compare those evaluations (ties are broken with an unbiased coin toss). Since the decoy tool cannot be used on an offline dataset, we restrict ourselves to finding an ε-approximation of the Pareto front, with ε = 0.05, and parameters γ = 0.9, δ = 0.001 and N = ⌊log ε/log γ⌋ = 28.
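A minimal sketch of this duel simulation, assuming the dataset has been preprocessed into a dict {film: {user: rating}} (an encoding introduced here for illustration):

import random

def movielens_duel(ratings, film_a, film_b):
    """Return 1 if film_a wins the duel against film_b, 0 otherwise."""
    # Sample a user who evaluated both films (assumed to exist after the
    # 50000-evaluation filter described above).
    common = list(set(ratings[film_a]) & set(ratings[film_b]))
    user = random.choice(common)
    ra, rb = ratings[film_a][user], ratings[film_b][user]
    if ra == rb:
        return random.randint(0, 1)   # tie: unbiased coin toss
    return 1 if ra > rb else 0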
Due to the lack of a ground truth for this experiment, no regret estimation can be provided. Instead, the resulting ε-Pareto front, which contains 5 films, is listed in Table 1 and compared to the five films among the original 159 with the highest average scores. It is interesting to note that three films are present in both lists, which reflects the fact that the best films in terms of average score have a high chance of being in the Pareto front. However, the films contained in the Pareto front are more diverse in terms of genre, which is expected of a Pareto front. For instance, the sequel of the film "The Godfather" has been replaced by a film of a totally different genre. It is important to remember that UnchainedBandits does not have access to any information about the genre of a film: its results are based solely on the pairwise evaluations, and this result illustrates the effectiveness of our approach.
Limit of the incomparability model. The hypothesis that i ∥ j implies Δ_{ij} = 0 might not always hold true in all real-life settings: for instance, movies of a niche genre will probably get dominated in user reviews by movies of a popular genre, even if they are theoretically incomparable, resulting in their elimination by UnchainedBandits. This might explain why only 5 movies are present in our ε-Pareto front. However, even in this case, the algorithm will produce a subset of the Pareto front, made of incomparable movies from popular genres. Hence, while the algorithm fails at finding all the different genres, it still provides significant diversity.
6 Conclusion
We introduced dueling bandits on posets and the problem of ε-indistinguishability. We provided
a new algorithm, UnchainedBandits, together with theoretical performance guarantees and
compelling experiments to identify the Pareto front. Future work might include the study of the
influence of additional hypotheses on the structure of the social poset, and see if some ideas proposed
here may carry over to lattices or upper semi-lattices. Additionally, it is an interesting question
whether different approaches to dueling bandits, such as Thompson Sampling [Wu and Liu, 2016],
could be applied to the partial order setting, and whether results for the von Neumann problem
[Balsubramani et al., 2016] can be rendered valid in the poset setting.
Acknowledgement
We would like to thank the anonymous reviewers of this work for their useful comments, particularly
regarding the future work section.
References
Nir Ailon, Zohar Karnin, and Thorsten Joachims. Reducing dueling bandits to cardinal bandits. In Proceedings of The 31st International Conference on Machine Learning, pages 856–864, 2014.
Dan Ariely and Thomas S. Wallsten. Seeking subjective dominance in multidimensional space: An explanation of the asymmetric dominance effect. Organizational Behavior and Human Decision Processes, 63(3):223–232, 1995.
Akshay Balsubramani, Zohar Karnin, Robert E. Schapire, and Masrour Zoghi. Instance-dependent regret bounds for dueling bandits. In Conference on Learning Theory, pages 336–360, 2016.
Constantinos Daskalakis, Richard M. Karp, Elchanan Mossel, Samantha J. Riesenfeld, and Elad Verbin. Sorting and selection in posets. SIAM Journal on Computing, 40(3):597–622, 2011.
Madalina M. Drugan and Ann Nowe. Designing multi-objective multi-armed bandits algorithms: a study. In Neural Networks (IJCNN), The 2013 International Joint Conference on, pages 1–8. IEEE, 2013.
Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, and Masrour Zoghi. Contextual dueling bandits. In Conference on Learning Theory, pages 563–587, 2015.
Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research, 7:1079–1105, 2006.
Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with noisy information. SIAM Journal on Computing, 23(5):1001–1018, 1994.
F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2015.
Timothy B. Heath and Subimal Chatterjee. Asymmetric decoy effects on lower-quality versus higher-quality brands: Meta-analytic and experimental evidence. Journal of Consumer Research, 22(3):268–284, 1995.
Joel Huber, John W. Payne, and Christopher Puto. Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, pages 90–98, 1982.
Junpei Komiyama, Junya Honda, Hisashi Kashima, and Hiroshi Nakagawa. Regret lower bound and optimal algorithm in dueling bandit problem. In Conference on Learning Theory, pages 1141–1154, 2015.
Junpei Komiyama, Junya Honda, and Hiroshi Nakagawa. Copeland dueling bandit problem: Regret lower bound, optimal algorithm, and computationally efficient algorithm. arXiv preprint arXiv:1605.01677, 2016.
Siddartha Y. Ramamohan, Arun Rajkumar, and Shivani Agarwal. Dueling bandits: Beyond Condorcet winners to general tournament solutions. In Advances in Neural Information Processing Systems, pages 1253–1261, 2016.
Robert B. Ash. Information Theory. 1990.
Constantine Sedikides, Dan Ariely, and Nils Olsen. Contextual and procedural determinants of partner selection: Of asymmetric dominance and prominence. Social Cognition, 17(2):118–139, 1999.
Amos Tversky and Daniel Kahneman. The framing of decisions and the psychology of choice. Science, 211(4481):453–458, 1981.
Huasen Wu and Xin Liu. Double Thompson sampling for dueling bandits. In Advances in Neural Information Processing Systems, pages 649–657, 2016.
Yisong Yue and Thorsten Joachims. Beat the mean bandit. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 241–248, 2011.
Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 78(5):1538–1556, 2012.
Masrour Zoghi, Shimon Whiteson, Remi Munos, and Maarten de Rijke. Relative upper confidence bound for the k-armed dueling bandit problem. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 10–18, 2014.
Masrour Zoghi, Zohar S. Karnin, Shimon Whiteson, and Maarten de Rijke. Copeland dueling bandits. In Advances in Neural Information Processing Systems, pages 307–315, 2015a.
Masrour Zoghi, Shimon Whiteson, and Maarten de Rijke. MergeRUCB: A method for large-scale online ranker evaluation. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 17–26. ACM, 2015b.
6,422 | 6,809 | Elementary Symmetric Polynomials for Optimal
Experimental Design
Zelda Mariet
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Suvrit Sra
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
We revisit the classical problem of optimal experimental design (OED) under a
new mathematical model grounded in a geometric motivation. Specifically, we
introduce models based on elementary symmetric polynomials; these polynomials
capture ?partial volumes? and offer a graded interpolation between the widely
used A-optimal design and D-optimal design models, obtaining each of them as
special cases. We analyze properties of our models, and derive both greedy and
convex-relaxation algorithms for computing the associated designs. Our analysis
establishes approximation guarantees on these algorithms, while our empirical
results substantiate our claims and demonstrate a curious phenomenon concerning
our greedy method. Finally, as a byproduct, we obtain new results on the theory
of elementary symmetric polynomials that may be of independent interest.
1 Introduction
Optimal Experimental Design (OED) develops the theory of selecting experiments to perform in
order to estimate a hidden parameter as well as possible. It operates under the assumption that
experiments are costly and cannot be run as many times as necessary or run even once without
tremendous difficulty [33]. OED has been applied in a large number of experimental settings [35, 9,
28, 46, 36], and has close ties to related machine-learning problems such as outlier detection [15, 22],
active learning [19, 18], Gaussian process driven sensor placement [27], among others.
We revisit the classical setting where each experiment depends linearly on a hidden parameter θ ∈ R^m. We assume there are n possible experiments whose outcomes y_i ∈ R can be written as

y_i = x_i^⊤ θ + ε_i,   1 ≤ i ≤ n,

where the x_i ∈ R^m and ε_i are independent, zero mean, and homoscedastic noises. OED seeks to answer the question: how to choose a set S of k experiments that allows us to estimate θ without bias and with minimal variance?

Given a feasible set S of experiments (i.e., Σ_{i∈S} x_i x_i^⊤ is invertible), the Gauss-Markov theorem shows that the lowest variance for an unbiased estimate θ̂ satisfies Var[θ̂] = (Σ_{i∈S} x_i x_i^⊤)^{−1}. However, Var[θ̂] is a matrix, and matrices do not admit a total order, making it difficult to compare different designs. Hence, OED is cast as an optimization problem that seeks an optimal design S*:

S* ∈ argmin_{S⊆[n], |S|≤k} Φ( (Σ_{i∈S} x_i x_i^⊤)^{−1} ),   (1.1)

where Φ maps positive definite matrices to R to compare the variances for each design, and may
help elicit different properties that a solution should satisfy, either statistical or structural.
Elfving [16] derived some of the earliest theoretical results for the linear dependency setting, focusing on the case where one is interested in reconstructing a predefined linear combination of the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
underlying parameters c^⊤θ (C-optimal design). Kiefer [26] introduced a more general approach to OED, by considering matrix means on positive definite matrices as a general way of evaluating optimality [33, Ch. 6], and Yu [48] derived general conditions for a map Φ under which a class of multiplicative algorithms for optimal design has guaranteed monotonic convergence.
Nonetheless, the theory of OED branches into multiple variants of (1.1) depending on the choice of Φ, among which A-optimal design (Φ = trace) and D-optimal design (Φ = determinant) are probably the two most popular choices. Each of these choices has a wide range of applications as well as statistical, algorithmic, and other theoretical results. We refer the reader to the classic book [33], which provides an excellent overview and introduction to the topic; see also the summaries in [1, 35]. For A-optimal design, Wang et al. [44] recently derived greedy and convex-relaxation approaches; [11] considers the problem of constrained adaptive sensing, where θ is supposed sparse. D-optimal design has historically been more popular, with several approaches to solving the related optimization problem [17, 38, 31, 20]. The dual problem of D-optimality, the Minimum Volume Covering Ellipsoid (MVCE), is also a well-known and deeply studied optimization problem [3, 34, 43, 41, 14, 42].
Experimental design has also been studied in more complex settings: [8] considers Bayesian optimal
design; under certain conditions, non-linear settings can be approached with linear OED [13, 25].
Due to the popularity of A- and D-optimal design, the theory surrounding these two sub-problems
has diverged significantly. However, both the trace and the determinant are special cases of fundamental spectral polynomials of matrices: elementary symmetric polynomials (ESP), which have
been extensively studied in matrix theory, combinatorics, information theory, and other areas due to
their importance in the theory of polynomials [24, 30, 21, 6, 23, 4].
These considerations motivate us to derive a broader view of optimal design which we call ESP-design, where Φ is obtained from an elementary symmetric polynomial. This allows us to consider A-optimal design and D-optimal design as special cases of ESP-design, and thus treat the entire ESP class in a unified manner. Let us state the key contributions of this paper more precisely below.
Contributions
• We introduce ESP-design, a new, general framework for OED that leverages geometric properties of positive definite matrices to interpolate between A- and D-optimality. ESP-design offers an intuitive setting in which to gradually scale between A-optimal and D-optimal design.
• We develop a convex relaxation as well as greedy algorithms to compute the associated designs. As a byproduct of our convex relaxation, we prove that ESPs are geodesically log-convex on the Riemannian manifold of positive definite matrices; this result may be of independent interest.
• We extend a result of Avron and Boutsidis [2] on determinantal column-subset selection to ESPs; as a consequence we obtain a greedy algorithm with provable optimality bounds for ESP-design.
Experiments on synthetic and real data illustrate the performance of our algorithms and confirm that
ESP-design can be used to obtain designs with properties that scale between those of both A- and
D-optimal designs, allowing users to tune trade-offs between their different benefits (e.g. predictive
error, sparsity, etc.). We show that our greedy algorithm generates designs of equal quality to the
famous Fedorov exchange algorithm [17], while running in a fraction of the time.
2 Preliminaries
We begin with some background material that also serves to set our notation. We omit proofs for
brevity, as they can be found in standard sources such as [6].
We define $[n] \triangleq \{1, 2, \dots, n\}$. For $S \subseteq [n]$ and $M \in \mathbb{R}^{n \times m}$, we write $M_S$ for the $|S| \times m$ matrix created by keeping only the rows of $M$ indexed by $S$, and $M[S|S']$ for the submatrix with rows indexed by $S$ and columns indexed by $S'$; by $x_{(i)}$ we denote the vector $x$ with its $i$-th component removed.
For a vector $v \in \mathbb{R}^m$, the elementary symmetric polynomial (ESP) of order $\ell \in \mathbb{N}$ is defined by
$$e_\ell(v) \triangleq \sum_{1 \le i_1 < \dots < i_\ell \le m} \prod_{j=1}^{\ell} v_{i_j} = \sum_{I \subseteq [m],\, |I| = \ell} \prod_{j \in I} v_j, \qquad (2.1)$$
where $e_\ell \equiv 0$ for $\ell = 0$ and $\ell > m$. Let $S_m^+$ ($S_m^{++}$) be the cone of positive semidefinite (positive definite) matrices of order $m$. We denote by $\lambda(M)$ the eigenvalues (in decreasing order) of a symmetric matrix $M$. Def. (2.1) extends to matrices naturally; ESPs are spectral functions, as we set $E_\ell(M) \triangleq e_\ell \circ \lambda(M)$. Additionally, they enjoy another representation that allows us to interpret them as "partial volumes", namely,
$$E_\ell(M) = \sum_{S \subseteq [n],\, |S| = \ell} \det(M[S|S]). \qquad (2.2)$$
The following proposition captures basic properties of ESPs that we will require in our analysis.
Proposition 2.1. Let $M \in \mathbb{R}^{m \times m}$ be symmetric and $1 \le \ell \le m$; also let $A, B \in S_m^+$. We have the following properties: (i) if $A \preceq B$ in Löwner order, then $E_\ell(A) \le E_\ell(B)$; (ii) if $M$ is invertible, then $E_\ell(M^{-1}) = \det(M^{-1})\, E_{m-\ell}(M)$; (iii) $\nabla e_\ell(\lambda) = [e_{\ell-1}(\lambda_{(i)})]_{1 \le i \le m}$.
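As a concrete illustration of Def. (2.1) and its spectral extension, the following minimal Python sketch (our own, assuming NumPy; not code from the paper) evaluates $e_\ell$ with the standard $O(m\ell)$ coefficient recurrence for $\prod_j (1 + v_j t)$, and $E_\ell(M)$ through the eigenvalues of $M$ rather than by enumerating the principal minors in (2.2).

```python
import numpy as np

def esp(v, ell):
    """e_ell(v) for ell >= 1, via the O(m*ell) coefficient recurrence:
    after absorbing each factor (1 + x t), coeff[k] holds e_k of the
    values seen so far (coeff[0] = 1 is the empty product)."""
    m = len(v)
    if ell > m:
        return 0.0
    coeff = np.zeros(ell + 1)
    coeff[0] = 1.0
    for x in v:
        coeff[1:] += x * coeff[:-1]  # RHS is materialized first, so this is safe
    return float(coeff[ell])

def E_ell(M, ell):
    """Spectral extension: E_ell(M) = e_ell(lambda(M)), which by (2.2)
    equals the sum of all ell x ell principal minors of M."""
    return esp(np.linalg.eigvalsh(M), ell)
```

For instance, `E_ell(M, 1)` returns the trace of `M` and `E_ell(M, M.shape[0])` its determinant, the two special cases that ESP-design interpolates between.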
3 ESP-design
A-optimal design uses $\Phi \equiv \mathrm{tr}$ in (1.1), and thus selects designs with low average variance. Geometrically, this translates into selecting confidence ellipsoids whose bounding boxes have a small diameter. Conversely, D-optimal design uses $\Phi \equiv \det$ in (1.1), and selects vectors that correspond to the ellipsoid with the smallest volume; as a result it is more sensitive to outliers in the data.¹ We introduce a natural model that scales between A- and D-optimal design. Indeed, by recalling that both the trace and the determinant are special cases of ESPs, we obtain a new model as fundamental as A- and D-optimal design, while being able to interpolate between the two in a graded manner. Unless otherwise indicated, we consider that we are selecting experiments without repetition.
3.1 Problem formulation
Let $X \in \mathbb{R}^{n \times m}$ ($m \le n$) be a design matrix with full column rank, and let $k \in \mathbb{N}$ be the budget ($m \le k \le n$). Define $\Gamma_k = \{S \subseteq [n] \ \text{s.t.}\ |S| \le k,\ X_S^\top X_S \succ 0\}$ to be the set of feasible designs that allow unbiased $\theta$ estimates. For $\ell \in \{1, \dots, m\}$, we introduce the ESP-design model:
$$\min_{S \in \Gamma_k}\ f_\ell(S) \triangleq \tfrac{1}{\ell} \log E_\ell\big((X_S^\top X_S)^{-1}\big). \qquad (3.1)$$
We keep the $1/\ell$-factor in (3.1) to highlight the homogeneity ($E_\ell$ is a polynomial of degree $\ell$) of our design criterion, as is advocated in [33, Ch. 6].
For $\ell = 1$, (3.1) yields A-optimal design, while for $\ell = m$, it yields D-optimal design. For $1 < \ell < m$, ESP-design interpolates between these two extremes. Geometrically, we may view it as seeking an ellipsoid with the smallest average volume for $\ell$-dimensional slices (taken across sets of size $\ell$). Alternatively, ESP-design can also be interpreted as a regularized version of D-optimal design via Prop. 2.1-(ii). In particular, for $\ell = m - 1$, we recover a form of regularized D-optimal design:
$$f_{m-1}(S) = \tfrac{1}{m-1}\Big[\log \det\big((X_S^\top X_S)^{-1}\big) + \log \|X_S\|_2^2\Big].$$
(3.1) is a known hard combinatorial optimization problem (in particular for $\ell = m$), which precludes an exact optimal solution. However, its objective enjoys remarkable properties that help us derive efficient algorithms for its approximate solution. The first of these is a natural convex relaxation, obtained below.
3.2 Continuous relaxation
We describe below a traditional approach to relaxing (3.1): the constraint on $S$ is relaxed by allowing elements in the set to have fractional multiplicities. The new optimization problem takes the form
$$\min_{z \in \Gamma_k^c}\ \tfrac{1}{\ell} \log E_\ell\big((X^\top \mathrm{Diag}(z) X)^{-1}\big), \qquad (3.2)$$
where $\Gamma_k^c$ denotes the set of vectors $\{z \in \mathbb{R}^n \mid 0 \le z_i \le 1\}$ such that $X^\top \mathrm{Diag}(z) X$ remains invertible and $1^\top z \le k$. The following is a direct consequence of Prop. 2.1-(i):
Proposition 3.1. Let $z^*$ be the optimal solution to (3.2). Then $\|z^*\|_1 = k$.
Convexity of $f_\ell$ on $\Gamma_k^c$ (where, by abuse of notation, $f_\ell$ also denotes the continuous relaxation in (3.2)) can be obtained as a consequence of [32]; however, we obtain it as a corollary of Lemma 3.3, which shows that $\log E_\ell$ is geodesically convex. This result seems to be new and is stronger than convexity of $f_\ell$; hence it may be of independent interest.
¹ For a more in-depth discussion of the geometric interpretation of various optimal designs, refer to e.g. [7, Section 7.5].
Definition 3.2 (geodesic convexity). A function $f : S_m^{++} \to \mathbb{R}$ defined on the Riemannian manifold $S_m^{++}$ is called geodesically convex if it satisfies
$$f(P \#_t Q) \le (1 - t) f(P) + t f(Q), \qquad t \in [0, 1],\ P, Q \succ 0,$$
where we use the traditional notation $P \#_t Q := P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}$ to denote the geodesic between $P$ and $Q \in S_m^{++}$ under the Riemannian metric $g_P(X, Y) = \mathrm{tr}(P^{-1} X P^{-1} Y)$.
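For concreteness, the geodesic $P \#_t Q$ is easy to compute by eigendecomposition; the sketch below is our own illustration (NumPy assumed, not the paper's code) and can be used to spot-check the geodesic log-convexity of $E_\ell$ asserted in Lemma 3.3 on random positive definite matrices.

```python
import numpy as np

def spd_power(P, t):
    """P^t for symmetric positive definite P, via eigendecomposition."""
    lam, U = np.linalg.eigh(P)
    return (U * lam**t) @ U.T  # U diag(lam^t) U^T

def geodesic(P, Q, t):
    """P #_t Q = P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}: the geodesic from
    P (t=0) to Q (t=1) under the metric g_P(X,Y) = tr(P^{-1} X P^{-1} Y).
    Numerically checking log E_ell(geodesic(P,Q,t)) <=
    (1-t) log E_ell(P) + t log E_ell(Q) is a quick sanity test of Lemma 3.3."""
    Ph = spd_power(P, 0.5)
    Pmh = spd_power(P, -0.5)
    return Ph @ spd_power(Pmh @ Q @ Pmh, t) @ Ph
```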
Lemma 3.3. The function $E_\ell$ is geodesically log-convex on the set of positive definite matrices.
Corollary 3.4. The map $M \mapsto E_\ell^{1/\ell}\big((X^\top M X)^{-1}\big)$ is log-convex on the set of PD matrices.
For further details on the theory of geodesically convex functions on $S_m^+$ and their optimization, we refer the reader to [40]. We prove Lemma 3.3 and Corollary 3.4 in Appendix A.
From Corollary 3.4, we immediately obtain that (3.2) is a convex optimization problem, and it can therefore be solved using a variety of efficient algorithms. Projected gradient descent turns out to be particularly easy to apply, because we only require projection onto the intersection of the cube $0 \le z \le 1$ and the plane $\{z \mid z^\top 1 = k\}$ (a consequence of Prop. 3.1). Projection onto this intersection is a special case of the so-called continuous quadratic knapsack problem, which is very well studied and can be solved essentially in linear time [10, 12].
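A minimal sketch of this projection step follows (our own illustration, not the linear-time solvers of [10, 12]): bisection on a scalar shift $\tau$ performs the projection onto $\{z : 0 \le z \le 1,\ 1^\top z = k\}$, and `grad_f` is a hypothetical, user-supplied gradient oracle for the relaxed objective (3.2) that we leave abstract here.

```python
import numpy as np

def project_capped_simplex(v, k, iters=60):
    """Euclidean projection onto {z : 0 <= z <= 1, sum(z) = k}. The map
    tau -> sum(clip(v - tau, 0, 1)) is non-increasing, so bisection on
    tau in z = clip(v - tau, 0, 1) converges to the feasible shift."""
    lo, hi = v.min() - 1.0, v.max()  # sum is n at lo and 0 at hi
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(v - tau, 0.0, 1.0)

def projected_gradient_descent(grad_f, z0, k, step=0.1, iters=200):
    """Projected gradient descent for the relaxation (3.2); grad_f(z) is
    an assumed gradient oracle for the relaxed objective."""
    z = project_capped_simplex(z0, k)
    for _ in range(iters):
        z = project_capped_simplex(z - step * grad_f(z), k)
    return z
```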
Remark 3.5. The convex relaxation remains log-convex when points can be chosen with multiplicity, in which case the projection step is also significantly simpler, requiring only $z \ge 0$.
We conclude the analysis of the continuous relaxation by showing a bound on the support of its
solution under some mild assumptions:
Theorem 3.6. Let $\varphi$ be the mapping from $\mathbb{R}^m$ to $\mathbb{R}^{m(m+1)/2}$ such that $\varphi(x) = (\varepsilon_{ij} x_i x_j)_{1 \le i \le j \le m}$ with $\varepsilon_{ij} = 1$ if $i = j$ and $\sqrt{2}$ otherwise. Let $\tilde{\varphi}(x) = (\varphi(x), 1)$ be the affine version of $\varphi$. If every set of $m(m+1)/2$ distinct rows of $X$ has an independent image under $\tilde{\varphi}$, then the support of the optimum $z^*$ of (3.2) satisfies $\|z^*\|_0 \le k + \frac{m(m+1)}{2}$.
The proof is identical to that of [44, Lemma 3.5], which shows such a result for A-optimal design;
we relegate it to Appendix B.
4 Algorithms and analysis
Solving the convex relaxation (3.2) does not directly provide a solution to (3.1); first, we must round the relaxed solution $z^* \in \Gamma_k^c$ to a discrete solution $S \in \Gamma_k$. We present two possibilities: (i) rounding the solution of the continuous relaxation (§4.1); and (ii) a greedy approach (§4.2).
4.1 Sampling from the continuous relaxation
For conciseness, we concentrate on sampling without replacement, but note that these results extend with minor changes to sampling with replacement (see [44]). Wang et al. [44] discuss the sampling scheme described in Alg. 1 for A-optimal design; the same idea easily extends to ESP-design. In particular, Alg. 1, applied to a solution of (3.2), provides the same asymptotic guarantees as those proven in [44, Lemma 3.2] for A-optimal design.
Algorithm 1: Sample from z*
Data: budget k, z* ∈ R^n
Result: S of size k
S ← ∅
while |S| < k do
    Sample i ∈ [n] \ S uniformly at random
    Sample x ~ Bernoulli(z*_i)
    if x = 1 then S ← S ∪ {i}
return S
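A direct transcription of Algorithm 1 into Python reads as follows (our sketch, NumPy assumed); the uniform draw from $[n] \setminus S$ is implemented by rejection.

```python
import numpy as np

def sample_design(z_star, k, rng=None):
    """Algorithm 1: grow S by drawing an unused index i uniformly at
    random and keeping it with probability z*_i, until |S| = k."""
    rng = rng or np.random.default_rng()
    n = len(z_star)
    S = set()
    while len(S) < k:
        i = int(rng.integers(n))
        if i in S:
            continue                   # already selected; redraw
        if rng.random() < z_star[i]:   # Bernoulli(z*_i) acceptance
            S.add(i)
    return sorted(S)
```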
Theorem 4.1. Let $\Sigma^* = X^\top \mathrm{Diag}(z^*) X$. Suppose $\|\Sigma^{*-1}\|_2\, \kappa(\Sigma^*)\, \|X\|_\infty^2 \log m = O(1)$. The subset $S$ constructed by sampling as above verifies, with probability $p = 0.8$,
$$E_\ell\big((X_S^\top X_S)^{-1}\big)^{1/\ell} \le O(1) \cdot E_\ell\big((X_{S^*}^\top X_{S^*})^{-1}\big)^{1/\ell}.$$
Theorem 4.1 shows that, under reasonable conditions, we can probabilistically construct a good approximation to the optimal solution in linear time, given the solution $z^*$ to the convex relaxation.
4.2 Greedy approach
In addition to the solution based on convex relaxation, ESP-design admits an intuitive greedy approach, despite not being a submodular optimization problem in general. Here, elements are removed one by one from a base set of experiments; this greedy removal, as opposed to greedy addition, turns out to be much more practical. Indeed, since $f_\ell$ is not defined for sets of size smaller than $k$, it is hard to greedily add experiments to the empty set and then bound the objective function after $k$ items have been added. This difficulty precludes analyses such as [45, 39] for optimizing non-submodular set functions by bounding their "curvature".
Algorithm 2: Greedy algorithm
Data: matrix X, budget k, initial set S0
Result: S of size k
S ← S0
while |S| > k do
    Find i ∈ S such that S \ {i} is feasible and i minimizes f_ℓ(S \ {i})
    S ← S \ {i}
return S
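The sketch below implements Algorithm 2 naively (our illustration, not the authors' code): the objective (3.1) is evaluated through the characteristic polynomial of $X_S^\top X_S$, feasibility is checked via the smallest eigenvalue, and the nested re-evaluations are what drive the $O(m^2 n^3)$ cost reported in §4.3.

```python
import numpy as np

def log_E_ell_inv(A, ell):
    """log E_ell(A^{-1}) for symmetric PD A. np.poly(lam) returns the
    coefficients of prod_i (t - lam_i); its k-th coefficient equals
    (-1)^k e_k(lam), so no minor enumeration is needed."""
    lam = 1.0 / np.linalg.eigvalsh(A)       # eigenvalues of A^{-1}
    c = np.poly(lam)
    return float(np.log((-1.0) ** ell * c[ell]))

def greedy_removal(X, k, ell, S0=None, tol=1e-10):
    """Algorithm 2: start from S0 (default: all n experiments) and drop,
    one at a time, the row whose removal minimizes f_ell(S \\ {i}), while
    keeping X_S^T X_S invertible. Assumes a feasible removal exists at
    every step."""
    n, m = X.shape
    S = set(range(n)) if S0 is None else set(S0)
    while len(S) > k:
        best_i, best_val = None, np.inf
        for i in S:
            rows = sorted(S - {i})
            A = X[rows].T @ X[rows]
            if np.linalg.eigvalsh(A)[0] <= tol:
                continue                       # infeasible: design not PD
            val = log_E_ell_inv(A, ell) / ell  # objective (3.1)
            if val < best_val:
                best_i, best_val = i, val
        S.remove(best_i)
    return sorted(S)
```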
Bounding the performance of Algorithm 2 relies on the following lemma.
Lemma 4.2. Let $X \in \mathbb{R}^{n \times m}$ ($n \ge m$) be a matrix with full column rank, and let $k$ be a budget, $m \le k \le n$. Let $S$ of size $k$ be a subset of $[n]$ drawn with probability $P \propto \det(X_S^\top X_S)$. Then
$$\mathbb{E}_{S \sim P}\Big[E_\ell\big((X_S^\top X_S)^{-1}\big)\Big] \le \prod_{i=1}^{\ell} \frac{n - m + i}{k - m + i} \cdot E_\ell\big((X^\top X)^{-1}\big), \qquad (4.1)$$
with equality if $X_S^\top X_S \succ 0$ for all subsets $S$ of size $k$.
Lemma 4.2 extends a result from [2, Lemma 3.9] on column-subset selection via volume sampling to all ESPs. In particular, it follows that removing one element (by volume-sampling a set of size $n - 1$) will in expectation decrease $f$ by a multiplicative factor, which is clearly also attained by greedy minimization. This argument then entails the following bound on Algorithm 2's performance. Proofs of both results are in Appendix C.
Theorem 4.3. Algorithm 2 initialized with a set $S_0$ of size $n_0$ produces a set $S^+$ of size $k$ such that
$$E_\ell\big((X_{S^+}^\top X_{S^+})^{-1}\big) \le \prod_{j=1}^{\ell} \frac{n_0 - m + j}{k - m + j} \cdot E_\ell\big((X_{S_0}^\top X_{S_0})^{-1}\big). \qquad (4.2)$$
As Wang et al. [44] note regarding A-optimal design, (4.2) provides a trivial optimality bound on the greedy algorithm when initialized with $S_0 = \{1, \dots, n\}$ ($S^*$ denotes the optimal set):
$$E_\ell\big((X_{S^+}^\top X_{S^+})^{-1}\big)^{1/\ell} \le \frac{n - m + \ell}{k - m + 1}\, e^{f_\ell(\{1, \dots, n\})} \le \frac{n - m + \ell}{k - m + 1}\, E_\ell\big((X_{S^*}^\top X_{S^*})^{-1}\big)^{1/\ell}. \qquad (4.3)$$
However, this naive initialization can be replaced by the support $\|z^*\|_0$ of the convex relaxation solution; in the common scenario described by Theorem 3.6, we then obtain the following result:
Theorem 4.4. Let $\tilde{\varphi}$ be the mapping defined in Theorem 3.6, and assume that all choices of $m(m+1)/2$ distinct rows of $X$ always have independent images under $\tilde{\varphi}$. Then the outcome of the greedy algorithm initialized with the support of the solution to the continuous relaxation verifies
$$f_\ell(S^+) \le \log\Big(\frac{k + m(m-1)/2 + \ell}{k - m + 1}\Big) + f_\ell(S^*).$$
4.3 Computational considerations
Computing the $\ell$-th elementary symmetric polynomial on a vector of size $m$ can be done in $O(m \log^2 \ell)$ time using Fast Fourier Transform polynomial multiplication, due to the construction introduced by Ben-Or (see [37]); hence, computing $f_\ell(S)$ requires $O(nm^2)$ time, where the cost is dominated by computing $X_S^\top X_S$. Alg. 1 runs in expectation in $O(n)$; Alg. 2 costs $O(m^2 n^3)$.
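To make the construction concrete, the sketch below (ours, not the paper's) computes the coefficients $e_0, \dots, e_\ell$ of $\prod_j (1 + v_j t)$ by merging subproduct polynomials in a balanced tree and truncating at degree $\ell$; the merges here use plain `np.convolve`, and swapping in FFT-based polynomial multiplication is what yields the $O(m \log^2 \ell)$ bound cited above.

```python
import numpy as np

def esp_prefix(v, ell):
    """Coefficients [e_0, ..., e_ell] of prod_j (1 + v_j t), computed by
    pairwise merging in a balanced tree. Truncation at degree ell is
    sound because higher-degree coefficients never feed back into
    lower-degree ones during multiplication."""
    polys = [np.array([1.0, x]) for x in v]   # each factor (1 + x t)
    while len(polys) > 1:
        merged = []
        for i in range(0, len(polys) - 1, 2):
            merged.append(np.convolve(polys[i], polys[i + 1])[: ell + 1])
        if len(polys) % 2:
            merged.append(polys[-1])          # odd leftover, carried up
        polys = merged
    out = np.zeros(ell + 1)
    out[: len(polys[0])] = polys[0][: ell + 1]
    return out
```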
5 Further Implications
We close our theoretical presentation by discussing a potentially important geometric problem related to ESP-design. In particular, our motivation here is the dual problem of D-optimal design (i.e., the dual to the convex relaxation of D-optimal design): this is nothing but the well-known Minimum Volume Covering Ellipsoid (MVCE) problem, a problem of great interest to the optimization community in its own right; see the recent book [42] for an excellent account.
With this motivation, we now develop the dual formulation for ESP-design. We start by deriving $\nabla E_\ell(A)$, for which we recall that $E_\ell(\cdot)$ is a spectral function, whereby the spectral calculus of Lewis [29] becomes applicable, saving us from intractable multilinear algebra [23]. More precisely, say $U^\top \Lambda U$ is the eigendecomposition of $A$, with $U$ unitary. Then, as $E_\ell(A) = e_\ell \circ \lambda(A)$,
$$\nabla E_\ell(A) = U^\top \mathrm{Diag}(\nabla e_\ell(\lambda))\, U = U^\top \mathrm{Diag}\big(e_{\ell-1}(\lambda_{(i)})\big)\, U. \qquad (5.1)$$
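Equation (5.1) translates directly into code; the sketch below (our illustration, NumPy assumed) forms $\nabla E_\ell(A)$ from an eigendecomposition, using Prop. 2.1-(iii) for $\nabla e_\ell$. As sanity checks, $\ell = 1$ returns the identity (gradient of the trace) and $\ell = m$ returns $\det(A)\, A^{-1}$ (gradient of the determinant).

```python
import numpy as np

def esp_vec(v, ell):
    """e_ell(v) via the O(len(v)*ell) coefficient recurrence; e_0 = 1."""
    c = np.zeros(ell + 1)
    c[0] = 1.0
    for x in v:
        c[1:] += x * c[:-1]
    return float(c[ell])

def grad_E_ell(A, ell):
    """Gradient of E_ell at symmetric A, per (5.1): with A = V diag(lam) V^T,
    grad E_ell(A) = V diag(e_{ell-1}(lam with entry i removed)) V^T."""
    lam, V = np.linalg.eigh(A)
    d = np.array([esp_vec(np.delete(lam, i), ell - 1) for i in range(len(lam))])
    return (V * d) @ V.T
```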
We can now derive the dual of ESP-design (we consider only $z \ge 0$); in this case problem (3.2) is
$$\sup_{A \succ 0,\, z \ge 0}\ \inf_{\mu \in \mathbb{R},\, H}\ -\tfrac{1}{\ell} \log E_\ell(A) - \mathrm{tr}\big(H(A^{-1} - X^\top \mathrm{Diag}(z) X)\big) - \mu(1^\top z - k),$$
which admits as dual
$$\inf_{\mu \in \mathbb{R},\, H}\ \sup_{A \succ 0,\, z \ge 0}\ \underbrace{-\tfrac{1}{\ell} \log E_\ell(A) - \mathrm{tr}(H A^{-1})}_{g(A)} + \mathrm{tr}\big(H X^\top \mathrm{Diag}(z) X\big) - \mu(1^\top z - k). \qquad (5.2)$$
We easily show that $H \succeq 0$ and that $g$ reaches its maximum on $S_m^{++}$ for $A$ such that $\nabla g = 0$. Rewriting $A = U^\top \Lambda U$, we have
$$\nabla g(A) = 0 \iff \Lambda\, \mathrm{Diag}\big(e_{\ell-1}(\lambda_{(i)})\big)\, \Lambda = e_\ell(\lambda)\, U H U^\top.$$
In particular, $H$ and $A$ are co-diagonalizable, with $\Lambda\, \mathrm{Diag}(e_{\ell-1}(\lambda_{(i)}))\, \Lambda = \mathrm{Diag}(h_1, \dots, h_m)$. The eigenvalues of $A$ must thus satisfy the system of equations
$$\lambda_i^2\, e_{\ell-1}(\lambda_1, \dots, \lambda_{i-1}, \lambda_{i+1}, \dots, \lambda_m) = h_i\, e_\ell(\lambda_1, \dots, \lambda_m), \qquad 1 \le i \le m.$$
Let $a(H)$ be such a matrix, i.e., the maximizer at which $\nabla g$ vanishes. Since $f_\ell$ is convex, $g(a(H)) = f_\ell^*(-H)$, where $f_\ell^*$ is the Fenchel conjugate of $f_\ell$. Finally, the dual optimization problem is given by
$$\sup_{x_i^\top H x_i \le 1,\, H \succeq 0} f_\ell^*(-H) = \sup_{x_i^\top H x_i \le 1,\, H \succeq 0} \tfrac{1}{\ell} \log E_\ell(a(H)).$$
Details of the calculation are provided in Appendix D. In the general case, neither $a(H)$ nor even $E_\ell(a(H))$ admits a closed form that we know of. Nevertheless, we recover the well-known duals of A-optimal design and D-optimal design as special cases.
Corollary 5.1. For $\ell = 1$, $a(H) = \mathrm{tr}(H^{1/2})\, H^{1/2}$, and for $\ell = m$, $a(H) = H$. Consequently, we recover the dual formulations of A- and D-optimal design.
6 Experimental results
We compared the following methods of solving (3.1):
- UNIF / UNIF FDV: k experiments are sampled uniformly / with Fedorov exchange
- GREEDY / GREEDY FDV: greedy algorithm (relaxed init.) / with Fedorov exchange
- SAMPLE: sampling (relaxed init.) as in Algorithm 1.
We also report results for the solution of the continuous relaxation (RELAX); the convex optimization was solved using projected gradient descent, the projection being done with the code from [12].
6.1 Synthetic experiments: optimization comparison
We generated the experimental matrix X by sampling n vectors of size m from the multivariate Gaussian distribution with mean 0 and sparse precision matrix $\Sigma^{-1}$ (density d ranging from 0.3 to 0.9). Due to the runtime of the Fedorov methods, results are reported for only one run; results averaged over multiple iterations (as well as for other distributions over X) are provided in Appendix E.
As shown in Fig. 1, the greedy algorithm applied to the convex relaxation's support outperforms sampling from the convex relaxation solution, and does as well as the usual Fedorov algorithm UNIF FDV; GREEDY FDV marginally improves upon the greedy algorithm and UNIF FDV. Strikingly, GREEDY provides designs of comparable quality to UNIF FDV; furthermore, as very few local exchanges improve upon its design, running the Fedorov algorithm with GREEDY initialization is much faster (Table 1). This is confirmed by Table 2, which shows the number of experiments in common for different algorithms: GREEDY and GREEDY FDV differ on only very few elements. As the budget k increases, the difference in performance between SAMPLE, GREEDY and the continuous relaxation decreases, and the simpler SAMPLE algorithm becomes competitive. Table 3 reports the support of the continuous relaxation solution for ESP-design with ℓ = 10.
Table 1: Runtimes (s) (ℓ = 10, d = 0.6)
k             40        80        120       160       200
GREEDY        2.8·10¹   2.7·10¹   3.1·10¹   4.0·10¹   5.2·10¹
GREEDY FDV    6.6·10¹   2.2·10²   3.2·10²   1.2·10²   1.3·10²
UNIF FDV      1.6·10³   4.1·10³   6.0·10³   6.2·10³   4.7·10³
Table 2: Common items between solutions (ℓ = 10, d = 0.6)
k                          40    80    120   160   200
|GREEDY ∩ UNIF FDV|        26    76    114   155   200
|GREEDY ∩ GREEDY FDV|      40    78    117   160   200
|UNIF FDV ∩ GREEDY FDV|    26    75    113   155   200
Table 3: ‖z*‖₀ (ℓ = 10)
k          40       80        120       160       200
d = 0.3    93 ± 3   117 ± 3   148 ± 2   181 ± 3   213 ± 2
d = 0.6    92 ± 7   117 ± 4   145 ± 4   180 ± 3   214 ± 4
d = 0.9    88 ± 3   116 ± 3   147 ± 4   179 ± 3   214 ± 1
6.2 Real data
We used the Concrete Compressive Strength dataset [47] (with column normalization) from the UCI repository to evaluate ESP-design on real data; this dataset consists of 1030 possible experiments to model concrete compressive strength as a linear combination of 8 physical parameters. In Figure 2 (a), OED chose k experiments to run to estimate $\theta$, and we report the normalized prediction error on the remaining n − k experiments. The best choice of OED for this problem is of course A-optimal design, which shows the smallest predictive error. In Figure 2 (b), we report the fraction of non-zero entries in the design matrix $X_S$; higher values of ℓ correspond to increasing sparsity. This confirms that ESP-design allows us to scale between the extremes of A-optimal design and D-optimal design to tune desirable side-effects of the design; for example, sparsity in a design matrix can indicate not needing to tune a potentially expensive experimental parameter, which is instead left at its default value.
7 Conclusion and future work
We introduced the family of ESP-design problems, which evaluate the quality of an experimental design using elementary symmetric polynomials, and showed that typical approaches to optimal design, such as continuous relaxation and greedy algorithms, can be extended to this broad family of problems, which covers A-optimal design and D-optimal design as special cases.
We derived new properties of elementary symmetric polynomials: we showed that they are geodesically log-convex on the space of positive definite matrices, enabling fast solutions to the relaxed ESP optimization problem. We furthermore showed in Lemma 4.2 that volume sampling, applied to the columns of the design matrix X, has a constant multiplicative impact on the objective function $E_\ell\big((X_S^\top X_S)^{-1}\big)$, extending Avron and Boutsidis's [2] result from the trace to all elementary symmetric polynomials.
[Figure 1: Synthetic experiments, n = 500, m = 30. Panels plot $f_\ell(S)$ against the budget k (40 to 200) for ℓ = 1 (A-opt), ℓ = 10, and ℓ = 20 (D-opt) at densities d = 0.3, 0.6, 0.9, comparing GREEDY, GREEDY FDV, SAMPLE, RELAX, UNIF, and UNIF FDV. The greedy algorithm performs as well as the classical Fedorov approach; as k increases, all designs except UNIF converge towards the continuous relaxation, making SAMPLE the best approach for large designs.]
[Figure 2: Predicting concrete compressive strength via the greedy method; panels show (a) predictive error (MSE, ×10⁻⁴) and (b) the ratio of non-zero entries in $X_S$, against the budget k (100 to 200), for ℓ = 1 (A-opt), ℓ = 3, ℓ = 6, and ℓ = 8 (D-opt). Higher ℓ increases the sparsity of the design matrix $X_S$, at the cost of marginally decreasing predictive performance.]
This allows us to derive a greedy algorithm with performance guarantees, which empirically performs as well as Fedorov exchange, in a fraction of the runtime.
However, our work still leaves some open questions: in deriving the Lagrangian dual of the optimization problem, we had to introduce the function $a(H)$, which maps into $S_m^{++}$; although $a(H)$ is known for $\ell = 1, m$, its form for other values of $\ell$ is unknown, making the dual form a purely theoretical object in the general case. Whether the closed form of $a$ can be derived, or whether $E_\ell(a(H))$ can be obtained with only knowledge of $H$, remains an open problem. Due to the importance of the dual form of D-optimal design as the Minimum Volume Covering Ellipsoid, we believe that further investigation of the general dual form of ESP-design will provide valuable insight, both into optimal design and into the general theory of optimization.
ACKNOWLEDGEMENTS
Suvrit Sra acknowledges support from NSF grant IIS-1409802 and DARPA Fundamental Limits of
Learning grant W911NF-16-1-0551.
References
[1] A. Atkinson, A. Donev, and R. Tobias. Optimum Experimental Designs, with SAS. Oxford Statistical Science Series. OUP Oxford, 2007.
[2] H. Avron and C. Boutsidis. Faster subset selection for matrices and applications. SIAM J. Matrix Analysis Applications, 34(4):1464–1499, 2013.
[3] E. R. Barnes. An algorithm for separating patterns by ellipsoids. IBM Journal of Research and Development, 26:759–764, 1982.
[4] H. H. Bauschke, O. Güler, A. S. Lewis, and H. S. Sendov. Hyperbolic polynomials and convex analysis. Canad. J. Math., 53(3):470–488, 2001.
[5] R. Bhatia. Matrix Analysis. Springer, 1997.
[6] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] K. Chaloner and I. Verdinelli. Bayesian experimental design: A review. Statistical Science, 10:273–304, 1995.
[9] D. A. Cohn. Neural network exploration using optimal experiment design. In Neural Networks, pages 679–686. Morgan Kaufmann, 1994.
[10] R. Cominetti, W. F. Mascarenhas, and P. J. S. Silva. A Newton's method for the continuous quadratic knapsack problem. Mathematical Programming Computation, 6(2):151–169, 2014.
[11] M. A. Davenport, A. K. Massimino, D. Needell, and T. Woolf. Constrained adaptive sensing. IEEE Transactions on Signal Processing, 64(20):5437–5449, 2016.
[12] T. A. Davis, W. W. Hager, and J. T. Hungerford. An efficient hybrid algorithm for the separable convex quadratic knapsack problem. ACM Trans. Math. Softw., 42(3):22:1–22:25, 2016.
[13] H. Dette, V. B. Melas, and W. K. Wong. Locally D-optimal designs for exponential regression models. Statistica Sinica, 16(3):789–803, 2006.
[14] A. N. Dolia, T. De Bie, C. J. Harris, J. Shawe-Taylor, and D. M. Titterington. The minimum volume covering ellipsoid estimation in kernel-defined feature spaces. In European Conference on Machine Learning, pages 630–637, 2006.
[15] E. N. Dolia, N. M. White, and C. J. Harris. D-optimality for minimum volume ellipsoid with outliers. In Proceedings of the Seventh International Conference on Signal/Image Processing and Pattern Recognition, pages 73–76, 2004.
[16] G. Elfving. Optimum allocation in linear regression theory. Ann. Math. Statist., 23(2):255–262, 1952.
[17] V. Fedorov. Theory of Optimal Experiments. Probability and mathematical statistics. Academic Press, 1972.
[18] Y. Gu and Z. Jin. Neighborhood preserving D-optimal design for active learning and its application to terrain classification. Neural Computing and Applications, 23(7):2085–2092, 2013.
[19] X. He. Laplacian regularized D-optimal design for active learning and its application to image retrieval. IEEE Trans. Image Processing, 19(1):254–263, 2010.
[20] T. Horel, S. Ioannidis, and S. Muthukrishnan. Budget feasible mechanisms for experimental design, pages 719–730. Springer Berlin Heidelberg, 2014.
[21] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[22] D. A. Jackson and Y. Chen. Robust principal component analysis and outlier detection with ecological data. Environmetrics, 15(2):129–139, 2004.
[23] T. Jain. Derivatives for antisymmetric tensor powers and perturbation bounds. Linear Algebra and its Applications, 435(5):1111–1121, 2011.
[24] R. Jozsa and G. Mitchison. Symmetric polynomials in information theory: Entropy and subentropy. Journal of Mathematical Physics, 56(6), 2015.
[25] A. I. Khuri, B. Mukherjee, B. K. Sinha, and M. Ghosh. Design issues for generalized linear models: A review. Statist. Sci., 21(3):376–399, 2006.
[26] J. Kiefer. Optimal design: Variation in structure and performance under change of criterion. Biometrika, 62:277–288, 1975.
[27] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9(Feb):235–284, 2008.
[28] F. S. Lasheras, J. V. Vilán, P. G. Nieto, and J. del Coz Díaz. The use of design of experiments to improve a neural network model in order to predict the thickness of the chromium layer in a hard chromium plating process. Mathematical and Computer Modelling, 52(7–8):1169–1176, 2010.
[29] A. S. Lewis. Derivatives of spectral functions. Math. Oper. Res., 21(3):576–588, 1996.
[30] I. G. Macdonald. Symmetric Functions and Hall Polynomials. Oxford University Press, 1998.
[31] A. J. Miller and N.-K. Nguyen. A Fedorov exchange algorithm for D-optimal design. Applied Statistics, 43:669–677, 1994.
[32] W. W. Muir. Inequalities concerning the inverses of positive definite matrices. Proceedings of the Edinburgh Mathematical Society, 19(2):109–113, 1974.
[33] F. Pukelsheim. Optimal Design of Experiments. Society for Industrial and Applied Mathematics, 2006.
[34] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, Inc., 1987.
[35] G. Sagnol. Optimal design of experiments with application to the inference of traffic matrices in large networks: second order cone programming and submodularity. Theses, École Nationale Supérieure des Mines de Paris, 2010.
[36] A. Schein and L. Ungar. A-optimality for active learning of logistic regression classifiers, 2004.
[37] A. Shpilka and A. Wigderson. Depth-3 arithmetic circuits over fields of characteristic zero. Computational Complexity, 10(1):1–27, 2001.
[38] S. Silvey, D. Titterington, and B. Torsney. An algorithm for optimal designs on a design space. Communications in Statistics - Theory and Methods, 7(14):1379–1389, 1978.
[39] J. D. Smith and M. T. Thai. Breaking the bonds of submodularity: Empirical estimation of approximation ratios for monotone non-submodular greedy maximization. CoRR, abs/1702.07002, 2017.
[40] S. Sra and R. Hosseini. Conic geometric optimization on the manifold of positive definite matrices. SIAM J. Optimization (SIOPT), 25(1):713–739, 2015.
[41] P. Sun and R. M. Freund. Computation of minimum-volume covering ellipsoids. Operations Research, 52(5):690–706, 2004.
[42] M. Todd. Minimum-Volume Ellipsoids. Society for Industrial and Applied Mathematics, 2016.
[43] L. Vandenberghe, S. Boyd, and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM J. Matrix Anal. Appl., 19(2):499–533, 1998.
[44] Y. Wang, A. W. Yu, and A. Singh. On computationally tractable selection of experiments in regression models, 2016.
[45] Z. Wang, B. Moran, X. Wang, and Q. Pan. Approximation for maximizing monotone non-decreasing set functions with a greedy method. J. Comb. Optim., 31(1):29–43, 2016.
[46] T. C. Xygkis, G. N. Korres, and N. M. Manousakis. Fisher information based meter placement in distribution grids via the D-optimal experimental design. IEEE Transactions on Smart Grid, PP(99), 2016.
[47] I.-C. Yeh. Modeling of strength of high-performance concrete using artificial neural networks. Cement and Concrete Research, 28(12):1797–1808, 1998.
[48] Y. Yu. Monotonic convergence of a general algorithm for computing optimal designs. Ann. Statist., 38(3):1593–1606, 2010.
A Parallel Gradient Descent Method for Learning
in Analog VLSI Neural Networks
J. Alspector
R. Meir'" B. Yuhas A. Jayakumar
Bellcore
Morristown, NJ 07962-1910
D. Lippet
Abstract
Typical methods for gradient descent in neural network learning involve
calculation of derivatives based on a detailed knowledge of the network
model. This requires extensive, time consuming calculations for each pattern presentation and high precision that makes it difficult to implement
in VLSI. We present here a perturbation technique that measures, not
calculates, the gradient. Since the technique uses the actual network as
a measuring device, errors in modeling neuron activation and synaptic
weights do not cause errors in gradient descent. The method is parallel
in nature and easy to implement in VLSI. We describe the theory of such
an algorithm, an analysis of its domain of applicability, some simulations
using it and an outline of a hardware implementation.
1 Introduction
The most popular method for neural network learning is back-propagation (Rumelhart, 1986) and related algorithms that calculate gradients based on detailed knowledge of the neural network model. These methods involve calculating exact values
of the derivative of the activation function. For analog VLSI implementations, such
techniques require impossibly high precision in the synaptic weights and precise
modeling of the activation functions. It is much more appealing to measure rather
than calculate the gradient for analog VLSI implementation by perturbing either a
*Present address: Dept. of EE; Technion; Haifa, Israel
†Present address: Dept. of EE; MIT; Cambridge, MA
single weight (Jabri, 1991) or a single neuron (Widrow, 1990) and measuring the resulting change in the output error. However, perturbing only a single weight or neuron at a time loses one of the main advantages of implementing neural networks in analog VLSI, namely, that of computing weight changes in parallel. The one-weight-at-a-time perturbation method has the same order of time complexity as a serial computer simulation of learning. A mathematical analysis of the possibility of model-free learning using parallel weight perturbations followed by local correlations suggests that random perturbations by additive, zero-mean, independent noise sources may provide a means of parallel learning (Dembo, 1990). We have previously used such a noise source (Alspector, 1991) in a different implementable learning model.
2 Gradient Estimation by Parallel Weight Perturbation
2.1 A Brownian Motion Algorithm
One can estimate the gradient of the error $E(w)$ with respect to any weight $w_l$ by perturbing $w_l$ by $\delta w_l$ and measuring the change in the output error $\delta E$ as the entire weight vector $w$, except for component $w_l$, is held constant:
$$\frac{\delta E}{\delta w_l} = \frac{E(w + \delta w_l) - E(w)}{\delta w_l}. \qquad (1)$$
This leads to an approximation to the true gradient,
$$\frac{\delta E}{\delta w_l} = \frac{\partial E}{\partial w_l} + O(|\delta w_l|). \qquad (2)$$
For small perturbations, the second (and higher order) term can be ignored. This
method of perturbing weights one-at-a-time has the advantage of using the correct
physical neurons and synapses in a VLSI implementation but has time complexity
of O(W) where W is the number of weights.
Following (Dembo, 1990), let us now consider perturbing all weights simultaneously. However, we wish to have the perturbation vector $\delta w$ chosen uniformly on a hypercube. Note that this requires only a random sign multiplying a fixed perturbation and is natural for VLSI. Dividing the resulting change in error by any single weight change, say $\delta w_l$, gives
$$\frac{\delta E}{\delta w_l} = \frac{E(w + \delta w) - E(w)}{\delta w_l}, \qquad (3)$$
which by a Taylor expansion is
$$\frac{\delta E}{\delta w_l} = \frac{\partial E}{\partial w_l} + \sum_{i \ne l}^{W} \frac{\partial E}{\partial w_i} \left(\frac{\delta w_i}{\delta w_l}\right) + O(\|\delta w\|), \qquad (4)$$
leading to the approximation (ignoring higher order terms)
$$\frac{\delta E}{\delta w_l} \approx \frac{\partial E}{\partial w_l} + \sum_{i \ne l}^{W} \frac{\partial E}{\partial w_i} \left(\frac{\delta w_i}{\delta w_l}\right). \qquad (5)$$
An important point of this paper, emphasized by (Dembo, 1990) and embodied in Eq. (5), is that the last term has expectation value zero for random and independently distributed $\delta w_i$, since the last expression in parentheses is equally likely to be +1 as -1. Thus, one can approximately follow the gradient by perturbing all weights at the same time. If each synapse has access to information about the resulting change in error, it can adjust its weight by assuming it was the only weight perturbed. The weight change rule
$$\Delta w_l = -\eta \frac{\delta E}{\delta w_l}, \qquad (6)$$
where $\eta$ is a learning rate, will follow the gradient on the average but with the considerable noise implied by the second term in Eq. (5). This type of stochastic gradient descent is similar to the random-direction Kiefer-Wolfowitz method (Kushner, 1978), which can be shown to converge under suitable conditions on $\eta$ and $\delta w_i$. This is also reminiscent of Brownian motion where, although particles may be subject to considerable random motion, there is a general drift of the ensemble of particles in the direction of even a weak external force. In this respect, there is some similarity to the directed drift algorithm of (Venkatesh, 1991), although that work applies to binary weights and single layer perceptrons whereas this algorithm should work for any level of weight quantization or precision - an important advantage for VLSI implementations - as well as any number of layers and even for recurrent networks.
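For concreteness, one update under this rule can be sketched as follows (our illustration, not the authors' implementation; the perturbation size .005 and learning rate .1 are the values used in the simulations of Section 3, and `error_fn` is an assumed black box returning the network's output error for the current pattern).

```python
import numpy as np

def perturbation_step(w, error_fn, delta=0.005, eta=0.1, rng=None):
    """One parallel-perturbation update, Eqs. (3)-(6): perturb every
    weight simultaneously by +/-delta (a random corner of the hypercube),
    measure the scalar change in output error, and update each weight as
    if it alone had been perturbed."""
    rng = rng or np.random.default_rng()
    signs = rng.choice([-1.0, 1.0], size=w.shape)
    dE = error_fn(w + delta * signs) - error_fn(w)
    grad_est = dE / (delta * signs)   # per-weight estimate, Eq. (5)
    return w - eta * grad_est         # weight change rule, Eq. (6)
```

Note that the update needs only two forward passes and one broadcast scalar `dE`, which is what makes it attractive for parallel analog hardware.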
2.2 Improving the Estimate by Multiple Perturbations
As was pointed out by (Dembo, 1990), for each pattern, one can reduce the variance of the noise term in Eq. (5) by repeating the random parallel perturbation many times to improve the statistical estimate. If we average over P perturbations, we have
have
oE
OWl
=
1
P
p
oE
L oif.
p=l
l
=
8E
8Wl
+
1
P
P
W
L ?=
p=l&>l
(EJE) (owf)
8Wi
(7)
OwPl
.where .p indexes
. the perturbation number. The variance of the second term, which
IS
a nOise, v,
IS
where the expectation value, <>, leads to the Kronecker delta function,
reduces Eq. (8) to
off, . This
I
$$\langle \nu^2 \rangle = \frac{1}{P^2} \sum_{p=1}^{P} \sum_{i \ne l}^{W} \left(\frac{\partial E}{\partial w_i}\right)^2. \qquad (9)$$
The double sum over perturbations and weights (assuming the gradient is bounded and all gradient directions have the same order of magnitude) has magnitude $O(PW)$, so that the variance is $O(W/P)$ and the standard deviation is
$$\nu_{\mathrm{rms}} \sim O\!\left(\sqrt{W/P}\right). \qquad (10)$$
Therefore, for a fixed variance in the noise term, it may be necessary to have a
number of perturbations of the same order as the number of weights. So, if a
high precision estimate of the gradient is needed throughout learning, it seems as
though the time complexity will still be O(W) giving no advantage over single
perturbations. However, one or a few of the gradient derivatives may dominate
the noise and reduce the effective number of parameters. One can also make a
qualitative argument that early in learning, one does not need a precise estimate of
the gradient since a general direction in weight space will suffice. Later, it will be
necessary to make a more precise estimate for learning to converge.
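A small extension of the earlier sketch averages the estimate over P perturbations as in Eq. (7) (again our illustration, with `error_fn` an assumed black-box error measurement):

```python
import numpy as np

def averaged_gradient_estimate(w, error_fn, P, delta=0.005, rng=None):
    """Average the parallel-perturbation estimate over P random-sign
    perturbations, Eq. (7); the variance of the noise term then falls
    as O(W/P), per Eqs. (9)-(10)."""
    rng = rng or np.random.default_rng()
    E0 = error_fn(w)
    g = np.zeros_like(w)
    for _ in range(P):
        signs = rng.choice([-1.0, 1.0], size=w.shape)
        g += (error_fn(w + delta * signs) - E0) / (delta * signs)
    return g / P
```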
2.3 The Gibbs Distribution and the Learning Problem
Note that the noise of Eq. (7) is gaussian, since it is composed of a sum of random sign terms, which leads to a binomial distribution that is gaussian distributed for large P. Thus, in the continuous time limit, the learning problem has Langevin dynamics such that the time rate of change of a weight $w_k$ is
$$\frac{dw_k}{dt} = -\eta \frac{\partial E}{\partial w_k} + \nu_k, \qquad (11)$$
and the learning problem converges in probability (Zinn-Justin, 1989), so that asymptotically $\Pr(w) \propto \exp[-\beta E(w)]$, where $\beta$ is inversely proportional to the noise variance.
Therefore, even though the gradient is noisy, one can still get a useful learning algorithm. Note that we can "anneal" $\nu_k$ by a variable perturbation method. Depending on the annealing schedule, this can result in a substantial speedup in learning over the one-weight-at-a-time perturbation technique.
Similar Work in these Proceedings
Coincidentally, there were three other papers with similar work at NIPS*92. This
algorithm was presented with different approaches by both (Flower, 1993) and
(Cauwenberghs, 1993). 1 A continuous time version was implemented in VLSI
but not on a neural network by (Kirk, 1993).
1 We note that (Cauwenberghs, 1993) shows that multiple perturbations are n o t needed for learning
if D.w is small enough and h e does not study the m . This do es not agree with our simulations (following)
839
3 Simulations
3.1 Learning with Various Perturbation Iterations
We tried some simple problems using this technique in software. We used a standard sigmoid activation function with unit gain, a fixed size perturbation of .005 and random sign. The learning rate, $\eta$, was .1 and momentum, $\alpha$, was 0. We varied the number of perturbation iterations per pattern presentation from 1 to 128 ($2^l$ where $l$ varies from 0 to 7). We performed 10 runs for each condition and averaged the results. Fig. 1a shows the average learning curves for a 6 input, 12 hidden, 1 output unit parity problem as the number of perturbations per pattern presentation is varied. The symbol plotted is $l$.
[Figure 1. Learning curves for 6-12-1 parity (a) and 6-6-6 replication (b), averaged over 10 runs; each curve is labeled by $l$, where $2^l$ is the number of perturbation iterations per pattern presentation.]
There seems to be a critical number of perturbations, $P_c$, about 16 ($l = 4$) in this case, below which learning slows dramatically.
We repeated the measurements of Fig. 1a for different sizes of the parity problem using an N-2N-1 network. We also made these measurements on a different problem, replication or identity, where the task is to replicate the bit pattern of the input on the output. We used an N-N-N network for this task so that we have a comparison with the parity problem as N varies for roughly the same number of weights ($2N^2 + 2N$) in each network. The learning curves for the 6-6-6 problem are plotted in Fig. 1b. The critical value also seems to be 16 ($l = 4$).
¹ (cont.) perhaps because we do not decrease $\delta w$ and $\eta$ as learning proceeds. He did not check this for large problems as we did. In an implementation, one will not be able to reduce $\delta w$ too much so that the effect on the output error can be measured. It is also likely that multiple perturbations can be done more quickly than multiple pattern presentations, if learning speed is an issue. He also notes the importance of correlating with the change in error rather than the error alone as in (Dembo, 1990).
3.2 Scaling of the Critical Value with Problem Size
To determine how the critical value of perturbation iterations scales, we tried a variety of problems besides the N-N-N replication and N-2N-1 parity. We added N-2N-N replication and N-N-1 parity to see how more weights affect the same problem. We also tried N-N-N/2 edge counting, where the output is the number of sign changes in an ordered row of N inputs. Finally, we tried N-2N-N and N-N-N hamming, where the output is the closest hamming code for the N inputs. We varied the number of perturbation iterations so that p = 1, 2, 5, 10, 20, 50, 100, 200, 400.
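For reference, the target functions of these benchmark tasks can be written down directly (our sketch; the binary encoding of the edge count into N/2 outputs is our assumption, as the text specifies only that the output is the number of sign changes):

```python
import numpy as np

def parity_target(x):
    """Parity (N-2N-1 and N-N-1): output 1 iff an odd number of inputs are on."""
    return np.array([int(x.sum()) % 2])

def replication_target(x):
    """Replication/identity (N-N-N and N-2N-N): reproduce the input pattern."""
    return x.copy()

def edge_count_target(x, width):
    """Edge counting (N-N-N/2): the number of sign changes in the ordered
    row of binary inputs, emitted here as a width-bit binary pattern."""
    edges = int(np.abs(np.diff(x)).sum())
    return np.array([(edges >> b) & 1 for b in range(width)])
```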
[Figure 2. Critical value scaling for different problems: each panel plots the critical number of perturbation iterations against the number of weights, for parity N-2N-1, edge counting N-N-N/2, hamming N-2N-N, and replication N-2N-N.]
Fig. 2 gives a feel for the effective scale of the problem by plotting the critical value
of the number of perturbation iterations as a function of the number of weights for
some of the problems we looked at. Note that the required number of iterations is
not a steep function of the network size except for the parity problem. We speculate
that the scaling properties are dependent on the shape of the error surface. If the
derivatives in Eq. 9 are large in all dimensions (learning on a bowl-shaped surface),
then the effective number of parameters is large and the variance of the noise term
will be on the order of the number of weights, leading to a steep dependence in
Fig. 2. If, however, there are only a few weight directions with significantly large
error derivatives (learning on a taco shell), then the noise will scale at a slower
rate than the number of weights leading to a weak dependence of the critical value
with problem size. This is actually a nice feature of parallel perturbative learning
because it means learning will be noisy and slow in a bowl where it's easy, but
precise and fast in a taco shell where it's hard.
The critical value is required for convergence at the end of learning but not at
the start. This means it should be possible to anneal the number of perturbation
iterations to achieve an additional speedup over the one-weight-at-a-time perturba-
tion technique. We would also like to understand how to vary $\delta w$ and $\eta$ as learning
proceeds. The stochastic approximation literature is likely to serve as a useful guide.
3.3 Computational Geometry of Stochastic Gradient Descent
[Figure 3. Computational geometry of stochastic gradient descent: (a) the error surface over two weights, with the relevant gradient vectors and angles; (b) direction cosines between the estimated and true gradient.]
Fig. 3a shows some relevant gradient vectors and angles in the learning problem. For a particular pattern presentation, the true gradient, $g_b$, from a back-propagation calculation is compared with the one-weight-at-a-time gradient, $g_o$, from a perturbation, $\delta w_i$, in one weight direction. The gradient from perturbing all weights, $g_m$, adds a noise vector to $g_o$. By taking the normalized dot product between $g_m$ and $g_b$, one obtains the direction cosine between the estimated and the true gradient direction. This is plotted in Fig. 3b for the 10 input N-N-1 parity problem for all nine perturbation values. The shaded bands increase in cosine (decrease in angle) as the number of perturbations goes from 1 to 400. Note that the angles are large but that learning still takes place. Note also that the dot product is almost always positive except for a few points at low perturbation numbers. Incidentally, by looking at plots of the true to one-weight-at-a-time angles (not shown), we see that the large angles are due almost entirely to the parallel perturbative noise term and not to the stepsize, $\delta w$.
4 Outline of an analog implementation
Fig. 4 shows a diagram of a learning synapse using this perturbation technique. Note that its only inputs are a single bit representing the sign of the perturbation and a broadcast signal representing the change in the output error. Multiple perturbations can be averaged by the summing buffer, and the weight is stored as charge on a capacitor or floating gate device.
An estimate of the power and area of an analog chip implementation gives the following: using a standard 1.2 μm, double poly technology, the synapse, with about 7 to 8 bits of resolution and including a 0.5 pF storage capacitor, weight refresh (Hochet, 1989) and update circuitry, can be fabricated with an area of about 1600 μm² and with a power dissipation of about 100 μW with continuous self-refresh. This translates into a chip of about 22000 synapses at 2.2 watts on a 36 mm² die core. It is likely that the power requirements can be greatly reduced with a more relaxed refresh technique or with a suitable non-volatile analog storage technology.
[Figure 4. Diagram of the perturbative learning synapse: a perturbation sign input and a broadcast error-change signal feed a summing and integrating buffer that updates the stored weight.]
We intend to use our noise generation technique (Alspector, 1991) to provide uncorrelated perturbations potentially to thousands of synapses. Note also that the
error signal can be generated by a simple resistor or a comparator followed by a
summer. The difference signal can be generated by a simple differentiator.
5 Conclusion
We have analyzed a parallel perturbative learning technique and shown that it
should converge under the proper conditions. We have performed simulations on
a variety of test problems to demonstrate the scaling behavior of this learning
algorithm. We are continuing work to understand speedups possible in an analog
VLSI implementation. Finally, we describe such an implementation. Future work
will involve applying this technique to learning in recurrent networks.
Acknowledgment
We thank Barak Pearlmutter for valuable and insightful discussions and Gert
Cauwenberghs for making an advance copy of his paper available. This work has
been partially supported by AFOSR contract F49620-90-C-0042, DEF.
References
J. Alspector, J. W. Gannett, S. Haber, M. B. Parker, and R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks", IEEE Trans. Circuits and Systems, 38, 109, (Jan. 1991).
J. Alspector, A. Jayakumar, and S. Luna, "Experimental Evaluation of Learning in a Neural Microsystem" in Advances in Neural Information Processing Systems 4, J. E. Moody, S. J. Hanson, and R. P. Lippmann (eds.) San Mateo, CA: Morgan Kaufmann Publishers (1992), pp. 871-878.
G. Cauwenberghs, "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
A. Dembo and T. Kailath, "Model-Free Distributed Learning", IEEE Trans. Neural Networks 1, (1990) pp. 58-70.
B. Flower and M. Jabri, "Summed Weight Neuron Perturbation: An O(n) Improvement over Weight Perturbation," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
B. Hochet, "Multivalued MOS memory for Variable Synapse Neural Network", Electronics Letters, vol. 25, no. 10, (May 11, 1989) pp. 669-670.
M. Jabri and B. Flower, "Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks", Neural Computation 3 (1991) pp. 546-565.
D. Kirk, D. Kerns, K. Fleischer, and A. Barr, "Analog VLSI Implementation of Gradient Descent," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
H. J. Kushner and D. S. Clark, "Stochastic Approximation Methods for Constrained and Unconstrained Systems", p. 58 ff., Springer-Verlag, New York, (1978).
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations, D. E. Rumelhart and J. L. McClelland (eds.), MIT Press, Cambridge, MA (1986), p. 318.
S. Venkatesh, "Directed Drift: A New Linear Threshold Algorithm for Learning Binary Weights On-Line", Journal of Computer Science and Systems, (1993), in press.
B. Widrow and M. A. Lehr, "30 years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation", Proc. IEEE 78 (1990) pp. 1415-1442.
J. Zinn-Justin, "Quantum Field Theory and Critical Phenomena", p. 57 ff., Oxford University Press, New York, (1989).
Emergence of Language with Multi-agent Games:
Learning to Communicate with Sequences of Symbols
Serhii Havrylov
ILCC, School of Informatics
University of Edinburgh
[email protected]
Ivan Titov
ILCC, School of Informatics
University of Edinburgh
ILLC, University of Amsterdam
[email protected]
Abstract
Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a
setting where two agents engage in playing a referential game and, from scratch,
develop a communication protocol necessary to succeed in this game. Unlike
previous work, we require that messages they exchange, both at train and test time,
are in the form of a language (i.e. sequences of discrete symbols). We compare a
reinforcement learning approach and one using a differentiable relaxation (straight-through Gumbel-softmax estimator (Jang et al., 2017)) and observe that the latter is
much faster to converge and it results in more effective protocols. Interestingly, we
also observe that the protocol we induce by optimizing the communication success
exhibits a degree of compositionality and variability (i.e. the same information can
be phrased in different ways), both properties characteristic of natural languages.
As the ultimate goal is to ensure that communication is accomplished in natural
language, we also perform experiments where we inject prior information about
natural language into our model and study properties of the resulting protocol.
1 Introduction
With the rapid advances in machine learning in recent years, the goal of enabling intelligent agents
to communicate with each other and with humans is turning from a hot topic of philosophical
debates into a practical engineering problem. It is believed that supervised learning alone is not
going to provide a solution to this challenge (Mikolov et al., 2015). Moreover, even learning natural
language from an interaction between humans and an agent may not be the most efficient and scalable
approach. These considerations, as well as desire to achieve a better understanding of principles
guiding evolution and emergence of natural languages (Nowak and Krakauer, 1999; Brighton, 2002),
have motivated previous research into setups where agents invent a communication protocol which
lets them succeed in a given collaborative task (Batali, 1998; Kirby, 2002; Steels, 2005; Baronchelli
et al., 2006). For an extensive overview of earlier work in this area, we refer the reader to Kirby
(2002) and Wagner et al. (2003).
We continue this line of research and specifically consider a setting where the collaborative task is a
game. Neural network models have been shown to be able to successfully induce a communication
protocol for this setting (Lazaridou et al., 2017; Jorge et al., 2016; Foerster et al., 2016; Sukhbaatar
et al., 2016). One important difference with these previous approaches is that we assume that
messages exchanged between the agents are variable-length strings of symbols rather than atomic
categories (as in the previous work). Our protocol would have properties more similar to natural
language and, as such, would have more advantages over using atomic categories. For example, it can
support compositionality (Werning et al., 2011) and provide an easy way to regulate the amount of
information conveyed in a message. Interestingly, in our experiments, we also find that agents develop
a protocol faster when we allow them to use longer sequences of symbols. Somewhat surprisingly, we
observe that the language derived by our method favours multiple encodings of the same information,
reminiscent of synonyms or paraphrases in natural languages. Moreover, with messages being strings
of symbols (i.e. words), it is now possible to inject supervision to ensure that the invented protocol is
close enough to a natural language and, thus, potentially interpretable by humans.
In our experiments, we focus on a referential game (Lewis, 1969), where the goal for one agent is to
explain which image the other agent should select. Our setting can be formulated as follows:
1. There is a collection of images $\{i_n\}_{n=1}^N$ from which a target image $t$ is sampled, as well as $K$ distracting images $\{d_k\}_{k=1}^K$.
2. There are two agents: a sender $S_\theta$ and a receiver $R_\phi$.
3. After seeing the target image $t$, the sender has to come up with a message $m_t$, which is represented by a sequence of symbols from a vocabulary $V$ of size $|V|$. The maximum possible length of a sequence is $L$.
4. Given the message $m_t$ and a set of images, which consists of the distracting images and the target image, the goal of the receiver is to identify the target image correctly.
This setting is inspired by Lazaridou et al. (2017) but there are important differences: for example,
we use sequences rather than single symbols, and our sender, unlike theirs, does not have access to
distracting images. This makes our setting both arguably more realistic and more challenging from
the learning perspective.
Generating message $m_t$ requires sampling from categorical distributions over the vocabulary, which makes backpropagating the error through the message impossible. It is tempting to formulate this game as a reinforcement learning problem. However, the number of possible messages[1] is proportional to $|V|^L$. Therefore, naïve Monte Carlo methods will give very high-variance estimates of the gradients, which makes the learning process harder. Also, in this setup, because the receiver $R_\phi$ tries to adapt to the produced messages, it corresponds to a non-stationary environment in which the sender $S_\theta$ acts, making the learning problem even more challenging. Instead, we propose an effective
approach where we use straight-through Gumbel-softmax estimators (Jang et al., 2017; Bengio et al.,
2013) allowing for end-to-end differentiation, despite using only discrete messages in training. We
demonstrate that this approach is much more effective than the reinforcement learning framework
employed in previous approaches to referential games, both in terms of convergence times and the
resulting communication success.
Our main contributions can be summarized as follows:
- we are the first to show that structured protocols (i.e. strings of symbols) can be induced from scratch by optimizing reward in collaborative tasks;
- we demonstrate that relaxations based on straight-through estimators are more effective than reinforcement learning for our task;
- we show that the induced protocol implements a hierarchical encoding scheme and that there exist multiple paraphrases encoding the same semantic content.
2 Model
2.1 Agents' architectures
The sender and the receiver are implemented as LSTM networks (Hochreiter and Schmidhuber,
1997). Figure 1 shows the sketch of model architecture where diamond-shaped, dashed and solid
arrows represent sampling, copying and deterministic functions respectively. The inputs to the sender
are the target image $t$ and the special token <S> that denotes the start of a message. Given these inputs, the sender generates the next token $w_i$ in a sequence by sampling from the categorical distribution $\mathrm{Cat}(p_i^t)$ where $p_i^t = \mathrm{softmax}(W h_i^s + b)$. Here, $h_i^s$ is the hidden state of the sender's LSTM and can be calculated as[2] $h_i^s = \mathrm{LSTM}(h_{i-1}^s, w_{i-1})$. In the first time step we have $h_0^s = \phi(f(t))$, where $\phi(\cdot)$ is an
[1] In our experiments $|V| = 10000$ and $L$ is up to 14.
[2] We omitted the cell state in the equation for brevity.
affine transformation of image features $f(\cdot)$ extracted from a convolutional neural network (CNN). Message $m_t$ is obtained by sampling sequentially until the maximum possible length $L$ is reached or the special token <S> is generated.
Figure 1: Architectures of sender and receiver.
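For concreteness, the sender's sampling loop can be sketched as follows; this is a minimal sketch assuming PyTorch, where the class layout, the use of an LSTMCell, and skipping the early stop on <S> are illustrative assumptions rather than the authors' code, with layer sizes borrowed from the experimental section below.

```python
# A minimal sender sketch, assuming PyTorch; names and details are illustrative.
import torch
import torch.nn as nn

class Sender(nn.Module):
    def __init__(self, feat_dim=4096, emb=256, hidden=512, vocab=10000, max_len=14):
        super().__init__()
        self.init = nn.Linear(feat_dim, hidden)   # phi(.): image features -> h_0^s
        self.embed = nn.Embedding(vocab, emb)
        self.cell = nn.LSTMCell(emb, hidden)
        self.out = nn.Linear(hidden, vocab)       # logits W h_i^s + b
        self.max_len, self.start = max_len, 0     # token id 0 stands in for <S>

    def forward(self, img_feat):
        b = img_feat.size(0)
        h = self.init(img_feat)                   # h_0^s = phi(f(t))
        c = torch.zeros_like(h)
        tok = torch.full((b,), self.start, dtype=torch.long)
        message = []
        for _ in range(self.max_len):             # early stop on <S> omitted here
            h, c = self.cell(self.embed(tok), (h, c))
            probs = torch.softmax(self.out(h), dim=-1)    # p_i^t
            tok = torch.multinomial(probs, 1).squeeze(1)  # w_i ~ Cat(p_i^t)
            message.append(tok)
        return torch.stack(message, dim=1)        # (b, L) token ids
```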
The inputs to the receiver are the generated message $m_t$ and a set of images that contains the target image $t$ and the distracting images $\{d_k\}_{k=1}^K$. The receiver's interpretation of the message is given by the affine transformation $g(\cdot)$ of the last hidden state $h_l^r$ of the LSTM network that reads the message. The loss function for the whole system can be written as:

$$\mathcal{L}_{\theta,\phi}(t) = \mathbb{E}_{m_t \sim p_\theta(\cdot \mid t)} \left[ \sum_{k=1}^{K} \max\bigl(0,\, 1 - f(t)^\top g(h_l^r) + f(d_k)^\top g(h_l^r)\bigr) \right] \qquad (1)$$
The energy function $E(v, m_t) = -f(v)^\top g(h_l^r(m_t))$ can be used to define a probability distribution over the set of images, $p(v \mid m_t) \propto e^{-E(v, m_t)}$. Communication between the two agents is successful if the target image has the highest probability according to this distribution.
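The following is a hedged sketch of the receiver-side scoring and the hinge loss of Eq. (1), assuming PyTorch; implementing $g(\cdot)$ as a single linear layer and all shapes are illustrative assumptions.

```python
# A sketch of the hinge loss of Eq. (1), assuming PyTorch; shapes are made up.
import torch
import torch.nn as nn

def hinge_game_loss(msg_repr, target_feat, distractor_feats, g):
    """msg_repr: (b, h) last LSTM state h_l^r after reading the message.
    target_feat: (b, d) CNN features f(t); distractor_feats: (b, K, d)."""
    interp = g(msg_repr)                                   # g(h_l^r), shape (b, d)
    pos = (target_feat * interp).sum(-1, keepdim=True)     # f(t)^T g(h_l^r), (b, 1)
    neg = torch.einsum('bkd,bd->bk', distractor_feats, interp)  # f(d_k)^T g(h_l^r)
    return torch.clamp(1.0 - pos + neg, min=0.0).sum(-1).mean()

# usage with illustrative dimensions
g = nn.Linear(512, 4096)
loss = hinge_game_loss(torch.randn(8, 512), torch.randn(8, 4096),
                       torch.randn(8, 127, 4096), g)
```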
2.2 Grounding in Natural Language
To ensure that communication is accomplished with a language that is understandable by humans,
we should favour protocols that resemble, in some respect, a natural language. Also, we would like
to check whether using sequences with statistical properties similar to those of a natural language
would be beneficial for communication. There are at least two ways to do this.
Indirect supervision can be implemented using the Kullback-Leibler (KL) divergence regularization $D_{KL}(q_\theta(m \mid t)\,\|\,p_{NL}(m))$ from the natural language to the learned protocol. As we do not have access to $p_{NL}(m)$, we train a language model $p_\psi$ using available samples (i.e. texts) and approximate the original KL divergence with $D_{KL}(q_\theta(m \mid t)\,\|\,p_\psi(m))$. We estimate the gradient of the divergence with respect to the $\theta$ parameters by applying the ST-GS estimator to a Monte Carlo approximation calculated with one message sampled from $q_\theta(m \mid t)$. This regularization provides indirect supervision by encouraging generated messages to have a high probability in natural language while at the same time maintaining high entropy in the communication protocol. Note that this is a weak form of grounding, as it does not force agents to preserve the "meanings" of words: the same word can refer to a very different concept in the induced artificial language and in the natural language.
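A one-sample Monte Carlo estimate of this regularizer could look as follows; this is a sketch under the assumption of a PyTorch setup with a pretrained language model exposing log-probabilities, and all names are illustrative.

```python
# A sketch of the one-sample Monte Carlo KL estimate, assuming PyTorch; the
# interface of the pretrained language model p_psi is an assumption.
import torch

def kl_regularizer(step_log_probs, message, lm_log_prob):
    """step_log_probs: (b, L, |V|) per-step log q_theta(. | t); message: (b, L)
    sampled token ids; lm_log_prob: callable returning (b,) log p_psi(m)."""
    log_q = step_log_probs.gather(-1, message.unsqueeze(-1)).squeeze(-1).sum(-1)
    log_p = lm_log_prob(message)
    # Note: to backpropagate through the sampled tokens, the ST-GS relaxed
    # one-hot vectors must be fed to both models instead of hard token ids.
    return (log_q - log_p).mean()   # one-sample estimate of D_KL(q || p_psi)
```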
The described indirect grounding of the artificial language in a natural language can be interpreted as
a particular instantiation of a variational autoencoder (VAE) (Kingma and Welling, 2014). There are
no gold standard messages for images. Thus, a message can be treated as a variable-length sequence
of discrete latent variables. On the other hand, image representations are always given. Hence they
are equivalent to the observed variable in the VAE framework. The trained language model p? (m)
serves as a prior over latent variables. The receiver agent is analogous to the generative part of the
VAE, although, it uses a slightly different loss for the reconstruction error (hinge loss instead of
log-likelihood). The sender agent is equivalent to an inference network used to approximate the
posteriors in VAEs.
Minimizing the KL divergence from the natural language distribution to the learned protocol distribution can ensure that statistical properties of the messages are similar to those of natural language.
However, words are not likely to preserve their original meaning (e.g. the word "red" may not refer to "red" in the protocol). To address this issue, a more direct form of supervision can be considered.
For example, additionally training the sender on the image captioning task (Vinyals et al., 2015),
assuming that there is a correct and most informative way to describe an image.
2.3 Learning
It is relatively easy to learn the receiver agent. It is end-to-end differentiable, so gradients of the loss
function with respect to its parameters can be estimated efficiently. The receiver-type model was
investigated before by Chrupała et al. (2015) and is known as Imaginet. It was used to learn visually
grounded representations of language from coupled textual and visual input. The real challenge is to
learn the sender agent. Its computational graph contains sampling, which makes it nondifferentiable.
In what follows in this section, we discuss methods for estimating gradients of the loss function in
Equation (1).
2.3.1 REINFORCE
REINFORCE is a likelihood-ratio method (Williams, 1992) that provides a simple way of estimating
gradients of the loss function with respect to parameters of the stochastic policy. We are interested
in optimizing the loss function from Equation (1). The REINFORCE algorithm enables the use of
gradient-based optimization methods by estimating gradients as:
$$\frac{\partial \mathcal{L}_{\theta,\phi}}{\partial \theta} = \mathbb{E}_{p_\theta(\cdot \mid t)}\left[ l(m_t)\, \frac{\partial \log p_\theta(m_t \mid t)}{\partial \theta} \right] \qquad (2)$$
where $l(m_t)$ is the learning signal, the inner part of the expectation in Equation (1). However,
computing the gradient precisely may not be feasible due to the enormous number of message
configurations. Usually, a Monte Carlo approximation of the expectation is used. Training models
with REINFORCE can be difficult, due to the high variance of the estimator. We observed more
reliable learning when using stabilizing techniques proposed by Mnih and Gregor (2014). Namely,
we use a baseline, defined as a moving average of the reward, to control variance of the estimator;
this results in centering the learning signal l(mt ). We also use a variance-based adaptation of the
learning rate that consists of dividing the learning rate by a running estimate of the reward standard
deviation. This trick ensures that the learning signal is approximately unit variance, making the
learning process less sensitive to dramatic and non-monotonic changes in the centered learning
signal. To take into account varying difficulty of describing different images, we use input-dependent
baseline implemented as a neural network with two hidden layers.
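The resulting surrogate loss with the moving-average baseline and variance-based scaling just described can be sketched as follows, assuming PyTorch; the decay constant and all names are illustrative, and the input-dependent baseline is omitted.

```python
# A hedged sketch of the stabilized REINFORCE surrogate loss; names are illustrative.
import torch

class ReinforceStats:
    def __init__(self, decay=0.99):
        self.decay, self.mean, self.var = decay, 0.0, 1.0

    def update(self, reward):
        self.mean = self.decay * self.mean + (1 - self.decay) * reward.mean().item()
        self.var = self.decay * self.var + (1 - self.decay) * reward.var().item()

def reinforce_loss(log_prob_message, reward, stats):
    """log_prob_message: (b,) log p_theta(m_t | t); reward: (b,) learning signal l(m_t)."""
    stats.update(reward)
    centered = (reward - stats.mean) / (stats.var ** 0.5 + 1e-8)  # center and scale
    return -(centered.detach() * log_prob_message).mean()         # surrogate loss
```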
2.3.2 Gumbel-softmax estimator
In the typical RL task formulation, an acting agent does not have access to the complete environment
specification, or, even if it does, the environment is non-differentiable. Thus, in our setup, an agent
that was trained by any REINFORCE-like algorithm would underuse available information about the
environment. As a solution, we consider replacing one-hot encoded symbols $w \in V$ sampled from a categorical distribution with a continuous relaxation $\tilde{w}$ obtained from the Gumbel-softmax distribution (Jang et al., 2017; Maddison et al., 2017).

Consider a categorical distribution with event probabilities $p_1, p_2, \ldots, p_K$; the Gumbel-softmax trick proceeds as follows: obtain $K$ samples $\{u_k\}_{k=1}^K$ from the uniformly distributed variable $u \sim U(0, 1)$, transform each sample with the function $g_k = -\log(-\log(u_k))$ to get samples from the Gumbel distribution, then compute a continuous relaxation:

$$\tilde{w}_k = \frac{\exp((\log p_k + g_k)/\tau)}{\sum_{i=1}^{K} \exp((\log p_i + g_i)/\tau)} \qquad (3)$$

where $\tau$ is the temperature that controls the accuracy of approximating $\arg\max$ with the softmax function. As the temperature $\tau$ approaches 0, samples from the Gumbel-softmax distribution become one-hot encoded, and the Gumbel-softmax distribution becomes identical to the categorical distribution (Jang et al., 2017).
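A minimal sketch of Eq. (3), assuming PyTorch; recent PyTorch versions also ship an equivalent built-in (torch.nn.functional.gumbel_softmax), this version just spells out the mechanics.

```python
# Relaxed sampling via the Gumbel-softmax trick of Eq. (3), assuming PyTorch.
import torch

def gumbel_softmax_sample(log_probs, tau=1.0):
    """log_probs: (..., K) log p_k of a categorical distribution."""
    u = torch.rand_like(log_probs)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)        # Gumbel(0, 1) noise
    return torch.softmax((log_probs + g) / tau, dim=-1)  # relaxed sample w~
```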
As a result of this relaxation, the game becomes completely differentiable and can be trained using the
backpropagation algorithm. However, communicating with real values allows the sender to encode
much more information into a message compared to using a discrete one and is unrealistic if our
ultimate goal is communication in natural language. Also, due to the recurrent nature of the receiver
agent, using discrete tokens during test time can lead to completely different dynamics compared to
the training time which uses continuous tokens. This manifests itself in a large gap between training
and testing performance (up to 20% drop in the communication success rate in our experiments).
2.3.3 Straight-through Gumbel-softmax estimator
To prevent the issues mentioned above, we discretize $\tilde{w}$ back with $\arg\max$ in the forward pass, so that it becomes an ordinary sample from the original categorical distribution. Nevertheless, we use the continuous relaxation in the backward pass, effectively assuming $\frac{\partial \mathcal{L}}{\partial w} \approx \frac{\partial \mathcal{L}}{\partial \tilde{w}}$. This biased estimator is
known as the straight-through Gumbel-softmax (ST-GS) estimator (Jang et al., 2017; Bengio et al.,
2013). As a result of applying this trick, there is no difference in message usage during training and
testing stages, which contrasts with previous differentiable frameworks for learning communication
protocols (Foerster et al., 2016).
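The straight-through discretization can be written with the standard re-centering trick, sketched below under the same PyTorch assumption.

```python
# Straight-through discretization: one-hot argmax forward, relaxed gradient backward.
import torch

def straight_through(w_tilde):
    """w_tilde: (..., K) relaxed sample from gumbel_softmax_sample()."""
    index = w_tilde.argmax(dim=-1, keepdim=True)
    w_hard = torch.zeros_like(w_tilde).scatter_(-1, index, 1.0)  # one-hot forward
    # (w_hard - w_tilde).detach() + w_tilde equals w_hard in the forward pass,
    # but its gradient flows through w_tilde only.
    return (w_hard - w_tilde).detach() + w_tilde
```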
Because of using ST-GS, the forward pass does not depend on the temperature. However, it still
affects the gradient values during the backward pass. As discussed before, low values of $\tau$ provide better approximations of $\arg\max$. Because the derivative of $\arg\max$ is 0 everywhere except at the boundary of state changes, a more accurate approximation would lead to a severe vanishing gradient problem. Nonetheless, with ST-GS we can afford to use large values of $\tau$, which usually leads to faster learning. In order to reduce the burden of performing an extensive hyperparameter search for the temperature, similarly to Gulcehre et al. (2017), we consider learning the inverse-temperature with a multilayer perceptron:

$$\frac{1}{\tau(h_i^s)} = \log(1 + \exp(w_\tau^\top h_i^s)) + \tau_0, \qquad (4)$$

where $\tau_0$ controls the maximum possible value of the temperature. In our experiments, we found that the learning process is not very sensitive to this hyperparameter as long as $\tau_0$ is less than 1.0.
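A sketch of the learned inverse-temperature of Eq. (4) follows, assuming PyTorch; implementing $w_\tau$ as a single linear layer (the text describes a multilayer perceptron) and the default $\tau_0$ are illustrative simplifications.

```python
# Learned inverse-temperature of Eq. (4); softplus(x) = log(1 + exp(x)).
import torch
import torch.nn as nn

class LearnedTemperature(nn.Module):
    def __init__(self, hidden=512, tau0=0.2):
        super().__init__()
        self.w_tau = nn.Linear(hidden, 1)
        self.tau0 = tau0

    def forward(self, h_s):
        inv_tau = torch.nn.functional.softplus(self.w_tau(h_s)) + self.tau0
        return 1.0 / inv_tau   # tau(h_i^s), bounded above by 1/tau0
```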
Despite the fact that the ST-GS estimator is computationally efficient, it is biased. To understand how reliable the provided direction is, one can check whether it can be regarded as a pseudogradient (for the results see Section 3.1). A direction $s$ is a pseudogradient of $J(u)$ if the condition $s^\top \nabla J(u) > 0$ is satisfied. Polyak and Tsypkin (1973) have shown that, given certain assumptions about the learning rate, a very broad class of pseudogradient methods converges to a critical point of the function $J$.
To examine whether the direction provided by ST-GS is a pseudogradient, we used a stochastic perturbation gradient estimator that can approximate the dot product between an arbitrary direction $s$ in the parameter space and the true gradient:

$$\frac{J(u + \epsilon s) - J(u - \epsilon s)}{2\epsilon} = s^\top \nabla J(u) + O(\epsilon^2) \qquad (5)$$
In our case J(u) is a Monte Carlo approximation of Equation (1). In order to reduce the variance
in dot product estimation (Bhatnagar et al., 2012), the same Gumbel noise samples can be used for
evaluating forward and backward perturbations of J(u).
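The check of Eq. (5) amounts to a central finite difference along the candidate direction; a hedged sketch follows, where the epsilon value and the functional interface are assumptions.

```python
# Finite-difference estimate of s^T grad J(u) as in Eq. (5), assuming PyTorch.
import torch

def directional_derivative(J, u, s, eps=1e-3):
    """J: callable returning a scalar loss; u: flat parameter tensor; s: direction
    (e.g. the ST-GS gradient). Reusing the same Gumbel noise inside J for both
    calls reduces variance."""
    with torch.no_grad():
        return (J(u + eps * s) - J(u - eps * s)) / (2 * eps)
```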
3 Experiments
3.1 Tabula rasa communication
We used the Microsoft COCO dataset (Chen et al., 2015) as a source of images. Prior to training,
we randomly selected 10% of the images from the MSCOCO 2014 training set as validation data
and kept the rest as training data. As a result of this split, more than 74k images were used for
training and more than 8k images for validation. To evaluate the learned communication protocol, we
used the MSCOCO 2014 validation set that consists of more than 40k images. In our experiments
images are represented by outputs of the relu7 layer from the pretrained 16-layer VGG convolutional
network (Simonyan and Zisserman, 2015).
Figure 2: The performance and properties of learned protocols.
We set the following model configuration without tuning: the embedding dimensionality is 256,
the dimensionality of LSTM layers is 512, the vocabulary size is 10000, the number of distracting
images is 127, the batch size is 128. We used Adam (Kingma and Ba, 2014) as an optimizer, with
default hyperparameters and the learning rate of 0.001 for the GS-ST method. For the REINFORCE
estimator we tuned the learning rate by searching for the optimal value over the interval $[10^{-5}, 0.1]$ with a multiplicative step size of $10^{-1}$. We did not observe significant improvements when using the input-dependent baseline and disregarded it for the sake of simplicity. To investigate the benefits of learning
temperature, first, we found the optimal temperature that is equal to 1.2 by performing a search over
interval [0.5; 2.0] with the step size equal to 0.1. As we mentioned before, the learning process with
temperature defined by Equation (4) is not very sensitive to the $\tau_0$ hyperparameter. Nevertheless, we conducted a hyperparameter search over the interval $[0.0, 2.0]$ with step size 0.1 and found that the model with $\tau_0 = 0.2$ has the best performance. The differences in performance were not significant unless $\tau_0$ was bigger than 1.0.
After training models we tested two encoding strategies: plain sampling and greedy argmax. That
means selecting an argmax of the corresponding categorical distribution at each time step. Figure 2
shows the communication success rate as a function of the maximum message length L. Because
results for models with learned temperature are very similar to the counterparts with fixed (manually
tuned) temperatures, we omitted them from the figure for clarity. However, on average, models with
learned temperatures outperform vanilla versions by 0.8%. As expected, argmax encoding slightly but
consistently outperforms the sampling strategy. Surprisingly, REINFORCE beats GS-ST for the setup
with L = 1. We may speculate that in this relatively easy setting being unbiased (as REINFORCE) is
more important than having a low variance (as GS-ST).
Interestingly, the number of updates that are required to achieve training convergence with the GS-ST
estimator decreases when we let the sender use longer messages (i.e. for larger L). This behaviour
is slightly surprising as one could expect that it is harder to learn the protocol when the space of
messages is larger. In other words, using longer sequences helps to learn a communication protocol
faster. However, this is not at all the case for the REINFORCE estimator: it usually takes five-fold
more updates to converge compared to GS-ST, and also there is no clear dependency between the
number of updates needed to converge and the maximum possible length of a message.
We also plot the perplexity of the encoder. It is relatively high and increasing with sentence length for
GS-ST, whereas for REINFORCE the perplexity increase is not as rapid. This implies redundancy in
the encodings: there exist multiple paraphrases that encode the same semantic content. A noteworthy
feature of GS-ST with learned temperature is that perplexity values of all encoders for different L are
always smaller than corresponding values for vanilla GS-ST.
Lastly, we calculated an estimate of the dot product between the true gradient of the loss function and
the direction provided by GS-ST estimator using Equation (5). We found that after 400 parameter
updates there is almost always (> 99%) an acute angle between the two. This suggests that GS-ST
gradient can be used as a pseudogradient for our referential game problem.
3.2 Qualitative analysis of the learned language
To better understand the nature of the learned language, we inspected a small subset of sentences
that were produced by the model with maximum possible message length equal to 5. To avoid cherry
picking images, we use the following strategy in both food and animal domains. First, we took a
random photo of an object and generated a message. Then we iterated over the dataset and randomly
selected images with messages that share prefixes of 1, 2 and 3 symbols with the given message.
Figure 3 shows some samples from the MSCOCO 2014 validation set that correspond to the code (5747 * * * *).[3] Images in this subset depict animals. On the other hand, it seems that images for the code (* * * 5747 *) do not correspond to any predefined category. This suggests that word order is crucial in the developed language. In particular, the word 5747 in the first position encodes the presence of an animal in the image. The same figure shows that the message (5747 5747 7125 * *) corresponds
to a particular type of bears. This suggests that the developed language implements some kind of
hierarchical coding. This is interesting by itself because the model was not constrained explicitly
to use any hierarchical encoding scheme. Presumably, this can help the model efficiently describe
unseen images. Nevertheless, natural language uses other principles to ensure compositionality. The
model shows similar behaviour for images in the food domain.
Figure 3: The samples from MS COCO that correspond to particular codes.
3.3 Indirect grounding of artificial language in natural language
We implemented the indirect grounding algorithm discussed in Section 2.2. We trained the language model $p_\psi(m)$ using an LSTM recurrent neural network. It was used as a prior distribution over the messages. To acquire data for estimating the parameters of the language model, we took image captions
of randomly selected (50%) images from the previously created training set. These images were not
used for training the sender and the receiver. Another half of the set was used for training agents. We
evaluated the learned communication protocol on the MSCOCO 2014 validation set.
To get an estimate of communication success when using natural language, we trained the receiver
with pairs of images and captions. This model is similar to Imaginet (Chrupała et al., 2015). Also,
inspired by their analysis, we report the omission score. The omission score of a word is equal to the difference between the target image probability given the original message and the probability given
a message with the removed word. The sentence omission score is the maximum over all word
omission scores in the given sentence. The score quantifies the change in the target image probability
after removing the most important word. Natural languages have content words that name objects
(i.e. nouns) and encode their qualities (e.g., adjectives). One can expect that a protocol that uses a
distinction between content words and function words would have a higher omission score than a
protocol that distributes information evenly across tokens. As Table 1 shows, the grounded language
has a communication success rate similar to that of natural language. However, it has a slightly lower omission score. The unregularized model has the lowest omission score, which probably means that symbols in the developed protocol are similar in nature to characters or syllables rather than words.
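The sentence omission score described above can be sketched as follows; the receiver interface here is an illustrative assumption.

```python
# A hedged sketch of the word-omission score; interfaces are assumptions.
def omission_score(p_target_given, message):
    """p_target_given: callable mapping a token list to the target-image probability
    under the receiver's distribution p(v | m). message: non-empty token list.
    Returns the sentence omission score."""
    base = p_target_given(message)
    drops = [base - p_target_given(message[:i] + message[i + 1:])
             for i in range(len(message))]
    return max(drops)   # change caused by removing the most important word
```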
[3] * means any word from the vocabulary or end-of-sentence padding.
Table 1: Comparison of the grounded protocol with the natural language and the artificial language

Model                     Comm. success (%)   Number of updates   Omission score
With KL regularization    52.51               11600               0.258
Without regularization    95.65               27600               0.193
Imaginet                  52.51               16100               0.287
3.4 Direct grounding of artificial language in natural language
As we discussed previously in Section 2.2, minimizing the KL divergence will ensure that statistical
properties of the protocol are going to be similar to those of natural language. However, words are
not likely to preserve their original meaning (e.g. the word "red" may refer to the concept of "blue" in
the protocol). To resolve this issue, we additionally trained the sender on the image captioning task.
To understand whether the additional communication loss can help in the setting where the amount of
the data is limited, we considered the following setup for the image description generation task.
To simulate the semi-supervised setting, we divided the previously created training set into two parts.
A randomly selected 25% of the dataset was used to train the sender on the image captioning task ($\mathcal{L}_{caption}$). The remaining 75% was used to train the sender and the receiver to solve the referential game ($\mathcal{L}_{game}$). The final loss is a weighted sum of the losses for the two tasks, $\mathcal{L} = \mathcal{L}_{caption} + \lambda \mathcal{L}_{game}$. We did
not perform any preprocessing of the gold standard captions apart from lowercasing. It is important
to mention that in this setup the communication loss is equivalent to the variational lower bound of
mutual information (Barber and Agakov, 2003) of image features and the corresponding caption.
Table 2: Metrics for image captioning models with and without communication loss

Model            BLEU-2   BLEU-3   BLEU-4   ROUGE-L   CIDEr   Avg. length
w/ comm. loss    0.435    0.290    0.195    0.492     0.590   13.93
w/o comm. loss   0.436    0.290    0.195    0.491     0.594   12.85
We used the greedy decoding strategy to sample image descriptions. As Table 2 shows, both systems
have comparable performance across different image captioning metrics. We believe that the model
did not achieve better performance because discriminative captions are different in nature compared
to reference captions. In fact generating discriminative descriptions may be useful for certain
applications (e.g., generating reference expressions in navigation instructions (Byron et al., 2009))
but it is hard to evaluate them intrinsically. Note that using the communication loss yields, on average, longer captions. This is not surprising: taking into account the mutual information interpretation of the
referential game, a longer sequence can retain more information about image features.
4 Related work
There is a long history of work on language emergence in multi-agent systems (Kirby, 2002; Wagner
et al., 2003; Steels, 2005; Nolfi and Mirolli, 2009; Golland et al., 2010). The recent generation
relied on deep learning techniques. More specifically, Foerster et al. (2016) proposed a differentiable
inter-agent learning (DIAL) framework where it was used to solve puzzles in a multi-agent setting.
The agents in their work were allowed to communicate by sending one-bit messages. Jorge et al.
(2016) adopted DIAL to solve the interactive image search task with two agents participating in the
task. These actors successfully developed a language consisting of one-hot encoded atomic symbols.
By contrast, Lazaridou et al. (2017) applied the policy gradient method to learn agents that are
involved in a referential game. Unlike us, they used atomic symbols rather than sequences of tokens.
Learning dialogue systems for collaborative activities between machines and humans was previously
considered by Lemon et al. (2002). Usually, they are represented by hybrid models that combine
reinforcement learning with supervised learning (Henderson et al., 2008; Schatzmann et al., 2006).
The idea of using the Gumbel-softmax distribution for learning language in a multi-agent environment was concurrently considered by Mordatch and Abbeel (2017). They studied a simulated
two-dimensional environment in continuous space and discrete time with several agents where, in
addition to performing physical actions, agents can also utter verbal communication symbols at every
timestep. Similarly to us, the induced language exhibits compositional structure and to a large degree
interpretable. Das et al. (2017), also in concurrent work, investigated a cooperative "image guessing"
game with two agents communicating in natural language. They use the policy gradient method for
learning, hence their framework can benefit from the approach proposed in this paper. One important
difference with our approach is that they pretrain their model on an available dialog dataset. By
contrast, we induce the communication protocol from scratch.
VAE-based approaches that use sequences of discrete latent variables were studied recently by
Miao and Blunsom (2016) and Kočiský et al. (2016) for text summarization and semantic parsing,
correspondingly. The variational lower bound for these models involves expectation with respect to
the distribution over sequences of symbols, so the learning strategy proposed here may be beneficial
in their applications.
5 Conclusion
In this paper, we have shown that agents, modeled using neural networks, can successfully invent
a language that consists of sequences of discrete tokens. Despite the common belief that it is hard
to train such models, we proposed an efficient learning strategy that relies on the straight-through
Gumbel-softmax estimator. We have performed analysis of the learned language and corresponding
learning dynamics. We have also considered two methods for injecting prior knowledge about natural
language. In the future work, we would like to extend this approach to modelling practical dialogs.
The "game" can be played between two agents rather than an agent and a human, while human interpretability would be ensured by integrating a supervised loss into the learning objective (as we did in Section 3.4, where we used captions). Hopefully, this will reduce the amount of necessary human
supervision.
Acknowledgments
This project is supported by SAP ICN, ERC Starting Grant BroadSem (678254) and NWO Vidi Grant
(639.022.518). We would like to thank Jelle Zuidema and anonymous reviewers for their helpful
suggestions and comments.
References
David Barber and Felix V Agakov. The IM Algorithm: A Variational Approach to Information
Maximization. In Advances in Neural Information Processing Systems, 2003.
Andrea Baronchelli, Maddalena Felici, Vittorio Loreto, Emanuele Caglioti, and Luc Steels. Sharp
transition towards shared vocabularies in multi-agent systems. Journal of Statistical Mechanics:
Theory and Experiment, 2006(06):P06014, 2006.
John Batali. Computational simulations of the emergence of grammar. Approaches to the evolution
of language: Social and cognitive bases, 405:426, 1998.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through
stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Shalabh Bhatnagar, HL Prasad, and LA Prashanth. Stochastic recursive algorithms for optimization:
simultaneous perturbation methods, volume 434. Springer, 2012.
Henry Brighton. Compositional syntax from cultural transmission. Artificial Life, 8(1):25–54, 2002.
Donna Byron, Alexander Koller, Kristina Striegnitz, Justine Cassell, Robert Dale, Johanna Moore,
and Jon Oberlander. Report on the first NLG challenge on generating instructions in virtual
environments (GIVE). In Proceedings of the 12th European workshop on natural language
generation, 2009.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and
C Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv
preprint arXiv:1504.00325, 2015.
Grzegorz Chrupała, Ákos Kádár, and Afra Alishahi. Learning language through pictures. In
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2015.
Abhishek Das, Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning. In Proceedings of International
Conference on Computer Vision and Image Processing, 2017.
Jakob Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate
with deep multi-agent reinforcement learning. In Advances in Neural Information Processing
Systems, pages 2137–2145, 2016.
Dave Golland, Percy Liang, and Dan Klein. A game-theoretic approach to generating spatial
descriptions. In Proceedings of the 2010 conference on empirical methods in natural language
processing, pages 410–419. Association for Computational Linguistics, 2010.
Caglar Gulcehre, Sarath Chandar, and Yoshua Bengio. Memory Augmented Neural Networks with
Wormhole Connections. arXiv preprint arXiv:1701.08718, 2017.
James Henderson, Oliver Lemon, and Kallirroi Georgila. Hybrid reinforcement/supervised learning
of dialogue policies from fixed data sets. Computational Linguistics, 34(4):487–511, 2008.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with Gumbel-Softmax. In
Proceedings of the International Conference on Learning Representations, 2017.
Emilio Jorge, Mikael Kågebäck, and Emil Gustavsson. Learning to Play Guess Who? and Inventing
a Grounded Language as a Consequence. In Neural Information Processing Systems, the 3rd Deep
Reinforcement Learning Workshop, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of
the 3rd International Conference for Learning Representations, 2014.
Diederik P Kingma and Max Welling. Auto-encoding Variational Bayes. In Proceedings of the 3rd
International Conference for Learning Representations, 2014.
Simon Kirby. Natural language from artificial life. Arificial Life, 8:185?215, 2002.
Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and
Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. arXiv
preprint arXiv:1609.09315, 2016.
Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the
emergence of (natural) language. In Proceedings of the International Conference on Learning
Representations, 2017.
Oliver Lemon, Alexander Gruenstein, and Stanley Peters. Collaborative activities and multi-tasking
in dialogue systems: Towards natural dialogue with robots. TAL. Traitement automatique des
langues, 43(2):131–154, 2002.
David Lewis. Convention: A philosophical study. 1969.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous
Relaxation of Discrete Random Variables. In Proceedings of the International Conference on
Learning Representations, 2017.
Yishu Miao and Phil Blunsom. Language as a latent variable: Discrete generative models for sentence
compression. In Proceedings of the Conference on Empirical Methods in Natural Language
Processing, 2016.
Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelligence. In
Neural Information Processing Systems, Reasoning, Attention, and Memory Workshop, 2015.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In
Proceedings of the 31st International Conference on Machine Learning, 2014.
Igor Mordatch and Pieter Abbeel. Emergence of Grounded Compositional Language in Multi-Agent
Populations. arXiv preprint arXiv:1703.04908, 2017.
Stefano Nolfi and Marco Mirolli. Evolution of communication and language in embodied agents.
Springer Science & Business Media, 2009.
M. A. Nowak and D. Krakauer. The evolution of language. PNAS, 96(14):8028–8033, 1999. doi:
10.1073/pnas.96.14.8028. URL http://groups.lis.illinois.edu/amag/langev/paper/
nowak99theEvolution.html.
BT Polyak and Ya Z Tsypkin. Pseudogradient adaptation and training algorithms. 1973.
Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge
engineering review, 21(2):97–126, 2006.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
recognition. In Proceedings of the International Conference on Learning Representations, 2015.
Luc Steels. What triggers the emergence of grammar. 2005.
Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning Multiagent Communication with
Backpropagation. In Advances in Neural Information Processing Systems, pages 2244–2252, 2016.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural
image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 3156–3164, 2015.
Kyle Wagner, James A Reggia, Juan Uriagereka, and Gerald S Wilkinson. Progress in the simulation
of emergent communication and language. Adaptive Behavior, 11(1):37–69, 2003.
M. Werning, W. Hinzen, and M. Machery. The Oxford handbook of compositionality. Oxford, UK,
2011.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine Learning, 8(3-4):229–256, 1992.
Training Deep Networks without Learning Rates
Through Coin Betting
Francesco Orabona*
Department of Computer Science
Stony Brook University
Stony Brook, NY
[email protected]
Tatiana Tommasi*
Department of Computer, Control, and
Management Engineering
Sapienza, Rome University, Italy
[email protected]

* The authors contributed equally.
Abstract
Deep learning methods achieve state-of-the-art performance in many application
scenarios. Yet, these methods require a significant amount of hyperparameters
tuning in order to achieve the best results. In particular, tuning the learning rates
in the stochastic optimization process is still one of the main bottlenecks. In this
paper, we propose a new stochastic gradient descent procedure for deep networks
that does not require any learning rate setting. Contrary to previous methods, we
do not adapt the learning rates nor we make use of the assumed curvature of the
objective function. Instead, we reduce the optimization process to a game of betting
on a coin and propose a learning-rate-free optimal algorithm for this scenario.
Theoretical convergence is proven for convex and quasi-convex functions and
empirical evidence shows the advantage of our algorithm over popular stochastic
gradient algorithms.
1 Introduction
In the last few years, deep learning has demonstrated great success in a large number of fields and has
attracted the attention of various research communities with the consequent development of multiple
coding frameworks (e.g., Caffe [Jia et al., 2014], TensorFlow [Abadi et al., 2015]), the diffusion of
blogs, online tutorials, books, and dedicated courses. Besides reaching out scientists with different
backgrounds, the need of all these supportive tools originates also from the nature of deep learning: it
is a methodology that involves many structural details as well as several hyperparameters whose
importance has been growing with the recent trend of designing deeper and multi-branch networks.
Some of the hyperparameters define the model itself (e.g., number of hidden layers, regularization
coefficients, kernel size for convolutional layers), while others are related to the model training
procedure. In both cases, hyperparameter tuning is a critical step to realize deep learning full potential
and most of the knowledge in this area comes from living practice, years of experimentation, and, to
some extent, mathematical justification [Bengio, 2012].
With respect to the optimization process, stochastic gradient descent (SGD) has proved itself to be a
key component of the deep learning success, but its effectiveness strictly depends on the choice of
the initial learning rate and learning rate schedule. This has primed a line of research on algorithms
to reduce the hyperparameter dependence in SGD; see Section 2 for an overview of the related literature. However, all previous algorithms resort to adapting the learning rates, rather than removing
them, or rely on assumptions on the shape of the objective function.
In this paper we aim at removing at least one of the hyperparameter of deep learning models. We
leverage recent advancements in the stochastic optimization literature to design a backpropagation procedure that does not have a learning rate at all, yet it is as simple as the vanilla SGD.
Specifically, we reduce the SGD problem to the game of betting on a coin (Section 4). In Section 5,
we present a novel strategy to bet on a coin that extends previous ones in a data-dependent way,
proving optimal convergence rate in the convex and quasi-convex setting (defined in Section 3).
Furthermore, we propose a variant of our algorithm for deep networks (Section 6). Finally, we show
how our algorithm outperforms popular optimization methods in the deep learning literature on a
variety of architectures and benchmarks (Section 7).
2 Related Work
Stochastic gradient descent offers several challenges in terms of convergence speed. Hence, the topic
of learning rate setting has been largely investigated.
Some of the existing solutions are based on the use of carefully tuned momentum terms [LeCun et al.,
1998b, Sutskever et al., 2013, Kingma and Ba, 2015]. It has been demonstrated that these terms can
speed-up the convergence for convex smooth functions [Nesterov, 1983]. Other strategies propose
scale-invariant learning rate updates to deal with gradients whose magnitude changes in each layer of
the network [Duchi et al., 2011, Tieleman and Hinton, 2012, Zeiler, 2012, Kingma and Ba, 2015].
Indeed, scale-invariance is a well-known important feature that has also received attention outside of
the deep learning community [Ross et al., 2013, Orabona et al., 2015, Orabona and Pal, 2015]. Yet,
both these approaches do not avoid the use of a learning rate.
A large family of algorithms exploit a second order approximation of the cost function to better capture
its local geometry and avoid the manual choice of a learning rate. The step size is automatically
adapted to the cost function with larger/shorter steps in case of shallow/steep curvature. Quasi-Newton methods [Wright and Nocedal, 1999] as well as the natural gradient method [Amari, 1998]
belong to this family. Although effective in general, they have a spatial and computational complexity
that is square in the number of parameters with respect to the first order methods, which makes the
application of these approaches unfeasible in modern deep learning architectures. Hence, typically
the required matrices are approximated with diagonal ones [LeCun et al., 1998b, Schaul et al., 2013].
Nevertheless, even assuming the use of the full information, it is currently unclear if the objective
functions in deep learning have enough curvature to guarantee any gain.
There exists a line of work on unconstrained stochastic gradient descent without learning
rates [Streeter and McMahan, 2012, Orabona, 2013, McMahan and Orabona, 2014, Orabona, 2014,
Cutkosky and Boahen, 2016, 2017]. The latest advancement in this direction is the strategy of reducing stochastic subgradient descent to coin-betting, proposed by Orabona and Pal [2016]. However,
their proposed betting strategy is worst-case with respect to the gradients received and cannot take
advantage, for example, of sparse gradients.
3 Definitions
We now introduce the basic notions of convex analysis that are used in the paper; see, e.g., Bauschke and Combettes [2011]. We denote by $\|\cdot\|_1$ the 1-norm in $\mathbb{R}^d$. Let $f : \mathbb{R}^d \to \mathbb{R} \cup \{\pm\infty\}$; the Fenchel conjugate of $f$ is $f^\star : \mathbb{R}^d \to \mathbb{R} \cup \{\pm\infty\}$ with $f^\star(\theta) = \sup_{x \in \mathbb{R}^d} \theta^\top x - f(x)$.
A vector $x$ is a subgradient of a convex function $f$ at $v$ if $f(v) - f(u) \le (v - u)^\top x$ for any $u$ in the domain of $f$. The differential set of $f$ at $v$, denoted by $\partial f(v)$, is the set of all the subgradients of $f$ at $v$. If $f$ is also differentiable at $v$, then $\partial f(v)$ contains a single vector, denoted by $\nabla f(v)$, which is the gradient of $f$ at $v$.
We go beyond convexity using the definition of weak quasi-convexity in Hardt et al. [2016]. This definition is relevant for us because Hardt et al. [2016] proved that $\tau$-weakly-quasi-convex objective functions arise in the training of linear recurrent networks. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $\tau$-weakly-quasi-convex over a domain $B \subseteq \mathbb{R}^d$ with respect to the global minimum $v^*$ if there is a positive constant $\tau > 0$ such that for all $v \in B$, $\tau\,(f(v) - f(v^*)) \le (v - v^*)^\top \nabla f(v)$. From the definition, it follows that differentiable convex functions are also 1-weakly-quasi-convex.
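As a quick sanity check of this definition, consider $f(x) = x^2$; the function is our own illustrative choice, not an example from the paper:

```latex
% Sanity check: f(v) = v^2 with global minimum v* = 0.
% Here f(v) - f(v*) = v^2 and (v - v*)^T \nabla f(v) = v \cdot 2v = 2v^2, so
%   \tau ( f(v) - f(v*) ) \le (v - v*)^T \nabla f(v)
% holds for every \tau \in (0, 2], and in particular for \tau = 1,
% consistent with differentiable convex functions being 1-weakly-quasi-convex.
\tau v^2 \;\le\; 2 v^2 \qquad \text{for all } v \in \mathbb{R},\ \tau \in (0, 2].
```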
Betting on a coin. We will reduce the stochastic subgradient descent procedure to betting on a number of coins. Hence, here we introduce the betting scenario and its notation. We consider a gambler making repeated bets on the outcomes of adversarial coin flips. The gambler starts with initial money $\epsilon > 0$. In each round $t$, he bets on the outcome of a coin flip $g_t \in \{-1, 1\}$, where $+1$ denotes heads and $-1$ denotes tails. We do not make any assumption on how $g_t$ is generated.
The gambler can bet any amount on either heads or tails. However, he is not allowed to borrow any additional money. If he loses, he loses the betted amount; if he wins, he gets the betted amount back and, in addition to that, he gets the same amount as a reward. We encode the gambler's bet in round $t$ by a single number $w_t$. The sign of $w_t$ encodes whether he is betting on heads or tails. The absolute value encodes the betted amount. We define $\mathrm{Wealth}_t$ as the gambler's wealth at the end of round $t$ and $\mathrm{Reward}_t$ as the gambler's net reward (the difference between wealth and the initial money), that is
$$\mathrm{Wealth}_t = \epsilon + \sum_{i=1}^{t} w_i g_i \qquad \text{and} \qquad \mathrm{Reward}_t = \mathrm{Wealth}_t - \epsilon = \sum_{i=1}^{t} w_i g_i. \tag{1}$$
In the following, we will also refer to a bet with $\beta_t$, where $\beta_t$ is such that
$$w_t = \beta_t\, \mathrm{Wealth}_{t-1}. \tag{2}$$
The absolute value of $\beta_t$ is the fraction of the current wealth to bet, and its sign encodes whether he is betting on heads or tails. The constraint that the gambler cannot borrow money implies that $\beta_t \in [-1, 1]$. We also slightly generalize the problem by allowing the outcome of the coin flip $g_t$ to be any real number in $[-1, 1]$, that is, a continuous coin; wealth and reward in (1) remain the same.
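To make the bookkeeping of (1) and (2) concrete, here is a minimal Python sketch of the betting protocol; the outcome sequence and the constant-fraction strategy below are illustrative placeholders, not the paper's algorithm:

```python
# Minimal sketch of the continuous coin-betting protocol, eqs. (1)-(2).

def play(coins, fraction, epsilon=1.0):
    """coins: outcomes g_t in [-1, 1]; fraction(t): signed bet beta_t."""
    wealth = epsilon                 # initial money, stays positive
    for t, g in enumerate(coins, start=1):
        beta = fraction(t)
        assert -1.0 <= beta <= 1.0   # the gambler cannot borrow money
        w = beta * wealth            # bet w_t = beta_t * Wealth_{t-1}
        wealth += w * g              # gain/lose the betted amount
    return wealth                    # Reward_t = wealth - epsilon

# Example: always bet half of the current wealth on heads.
print(play([1, 1, -1, 1], fraction=lambda t: 0.5))  # prints 1.6875
```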
4 Subgradient Descent through Coin Betting
In this section, following Orabona and Pal [2016], we briefly explain how to reduce subgradient
descent to the gambling scenario of betting on a coin.
Consider as an example the function $F(x) := |x - 10|$ and the optimization problem $\min_x F(x)$. This function does not have any curvature; in fact, it is not even differentiable, thus no second order optimization algorithm could reliably be used on it. We set the outcome of the coin flip $g_t$ to be equal to the negative subgradient of $F$ in $w_t$, that is $g_t \in \partial[-F(w_t)]$, where we remind that $w_t$ is the amount of money we bet. Given our choice of $F(x)$, its negative subgradients are in $\{-1, 1\}$. In the first iteration we do not bet, hence $w_1 = 0$ and our initial money is \$1. Let's also assume that there exists a function $H(\cdot)$ such that our betting strategy will guarantee that the wealth after $T$ rounds will be at least $H(\sum_{t=1}^{T} g_t)$ for any arbitrary sequence $g_1, \cdots, g_T$.
We claim that the average of the bets, $\frac{1}{T}\sum_{t=1}^{T} w_t$, converges to the solution of our optimization problem, and the rate depends on how good our betting strategy is. Let's see how.
Denoting by $x^*$ the minimizer of $F(x)$, we have that the following holds:
$$F\!\left(\frac{1}{T}\sum_{t=1}^{T} w_t\right) - F(x^*) \le \frac{1}{T}\sum_{t=1}^{T} F(w_t) - F(x^*) \le \frac{1}{T}\sum_{t=1}^{T} g_t x^* - \frac{1}{T}\sum_{t=1}^{T} g_t w_t$$
$$\le \frac{1}{T}\left(1 + \sum_{t=1}^{T} g_t x^* - H\!\left(\sum_{t=1}^{T} g_t\right)\right) \le \frac{1}{T}\left(1 + \max_{v}\,\left(v x^* - H(v)\right)\right) = \frac{H^\star(x^*) + 1}{T},$$
where in the first inequality we used Jensen?s inequality, in the second the definition of subgradients,
in the third our assumption on H, and in the last equality the definition of Fenchel conjugate of H.
In words, we used a gambling algorithm to find the minimizer of a non-smooth objective function
by accessing its subgradients. All we need is a good gambling strategy. Note that this is just a
very simple one-dimensional example, but the outlined approach works in any dimension and for
any convex objective function, even if we just have access to stochastic subgradients [Orabona and
Pal, 2016]. In particular, if the gradients are bounded in a range, the same reduction works using a
continuous coin.
Orabona and Pal [2016] showed that the simple betting strategy $\beta_t = \frac{\sum_{i=1}^{t-1} g_i}{t}$ gives an optimal growth rate of the wealth and optimal worst-case convergence rates. However, it is not data-dependent, so it does not adapt to the sparsity of the gradients. In the next section, we will show an actual betting strategy that guarantees optimal convergence rate and adaptivity to the gradients.
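Before moving on, here is a toy Python sketch of the reduction in action on $F(x) = |x - 10|$, using the simple strategy $\beta_t = \frac{\sum_{i=1}^{t-1} g_i}{t}$ just mentioned; this is our own reproduction for intuition, not the authors' code:

```python
# Toy coin-betting reduction for F(x) = |x - 10|: the coin outcome g_t is a
# negative subgradient of F at the current bet w_t.

def neg_subgradient(w, target=10.0):
    return 1.0 if w < target else -1.0      # in {-1, 1}

wealth, w, g_sum, w_sum, T = 1.0, 0.0, 0.0, 0.0, 1000
for t in range(1, T + 1):
    g = neg_subgradient(w)
    wealth += w * g                         # settle the bet w_t
    w_sum += w
    g_sum += g
    w = (g_sum / (t + 1)) * wealth          # next bet: beta_{t+1} * Wealth_t

print(w_sum / T)   # the average bet approaches x* = 10 as T grows
```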
Algorithm 1 COntinuous COin Betting - COCOB
1: Input: $L_i > 0$, $i = 1, \cdots, d$; $w_1 \in \mathbb{R}^d$ (initial parameters); $T$ (maximum number of iterations); $F$ (function to minimize)
2: Initialize: $G_{0,i} \leftarrow L_i$, $\mathrm{Reward}_{0,i} \leftarrow 0$, $\theta_{0,i} \leftarrow 0$, $i = 1, \cdots, d$
3: for $t = 1, 2, \ldots, T$ do
4:   Get a (negative) stochastic subgradient $g_t$ such that $\mathbb{E}[g_t] \in \partial[-F(w_t)]$
5:   for $i = 1, 2, \ldots, d$ do
6:     Update the sum of the absolute values of the subgradients: $G_{t,i} \leftarrow G_{t-1,i} + |g_{t,i}|$
7:     Update the reward: $\mathrm{Reward}_{t,i} \leftarrow \mathrm{Reward}_{t-1,i} + (w_{t,i} - w_{1,i})\, g_{t,i}$
8:     Update the sum of the gradients: $\theta_{t,i} \leftarrow \theta_{t-1,i} + g_{t,i}$
9:     Calculate the fraction to bet: $\beta_{t,i} = \frac{1}{L_i}\left(2\sigma\!\left(\frac{2\theta_{t,i}}{G_{t,i}+L_i}\right) - 1\right)$, where $\sigma(x) = \frac{1}{1+\exp(-x)}$
10:    Calculate the parameters: $w_{t+1,i} \leftarrow w_{1,i} + \beta_{t,i}\,(L_i + \mathrm{Reward}_{t,i})$
11:  end for
12: end for
13: Return $\bar{w}_T = \frac{1}{T}\sum_{t=1}^{T} w_t$ or $w_I$ where $I$ is chosen uniformly between $1$ and $T$
5 The COCOB Algorithm
We now introduce our novel algorithm for stochastic subgradient descent, COntinuous COin Betting (COCOB), summarized in Algorithm 1. COCOB generalizes the reasoning outlined in the previous section to the optimization of a function $F : \mathbb{R}^d \to \mathbb{R}$ with bounded subgradients, reducing the optimization to betting on $d$ coins.
Similarly to the construction in the previous section, the outcomes of the coins are linked to the stochastic gradients. In particular, each $g_{t,i} \in [-L_i, L_i]$ for $i = 1, \cdots, d$ is equal to the coordinate $i$ of the negative stochastic gradient $g_t$ of $F$ in $w_t$. With the notation of the algorithm, COCOB is based on the strategy to bet a signed fraction of the current wealth equal to $\frac{1}{L_i}\left(2\sigma\!\left(\frac{2\theta_{t,i}}{G_{t,i}+L_i}\right) - 1\right)$, where $\sigma(x) = \frac{1}{1+\exp(-x)}$ (lines 9 and 10). Intuitively, if $\frac{\theta_{t,i}}{G_{t,i}+L_i}$ is big in absolute value, it means that we received a sequence of equal outcomes, i.e., gradients, hence we should increase our bets, i.e., the absolute value of $w_{t,i}$. Note that this strategy assures that $|w_{t,i}\, g_{t,i}| < \mathrm{Wealth}_{t-1,i}$, so the wealth of the gambler is always positive. Also, it is easy to verify that the algorithm is scale-free, because multiplying all the subgradients and $L_i$ by any positive constant would result in the same sequence of iterates $w_{t,i}$.
Note that the update in line 10 is carefully defined: The algorithm does not use the previous wt,i in
the update. Indeed, this algorithm belongs to the family of the Dual Averaging algorithms, where the
iterate is a function of the average of the past gradients [Nesterov, 2009].
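As a concrete companion to Algorithm 1, a direct NumPy transcription might look as follows; this is a sketch for illustration, where the oracle `grad` is assumed to return a negative stochastic subgradient and the final averaging mirrors line 13:

```python
import numpy as np

def cocob(grad, w1, L, T):
    """Sketch of Algorithm 1. grad(w): negative stochastic subgradient of F;
    L: per-coordinate ranges with |g_i| <= L_i; returns the averaged iterate."""
    G, reward, theta = L.copy(), np.zeros_like(w1), np.zeros_like(w1)
    w, w_avg = w1.copy(), np.zeros_like(w1)
    for t in range(1, T + 1):
        g = grad(w)                  # line 4
        w_avg += w / T               # running average of w_1, ..., w_T
        G += np.abs(g)               # line 6
        reward += (w - w1) * g       # line 7
        theta += g                   # line 8
        sigma = 1.0 / (1.0 + np.exp(-2.0 * theta / (G + L)))
        beta = (2.0 * sigma - 1.0) / L          # line 9: fraction to bet
        w = w1 + beta * (L + reward)            # line 10: place the bets
    return w_avg

# Example: minimize F(w) = |w - 10| in one dimension.
print(cocob(lambda w: np.sign(10.0 - w), np.zeros(1), np.ones(1), 2000))
```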
Denoting by $w^*$ a minimizer of $F$, COCOB satisfies the following convergence guarantee.
Theorem 1. Let $F : \mathbb{R}^d \to \mathbb{R}$ be a $\tau$-weakly-quasi-convex function and assume that the $g_t$ satisfy $|g_{t,i}| \le L_i$. Then, running COCOB for $T$ iterations guarantees, with the notation in Algorithm 1,
$$\mathbb{E}[F(w_I)] - F(w^*) \le \sum_{i=1}^{d} \frac{L_i + |w_i^* - w_{1,i}| \sqrt{\mathbb{E}\!\left[L_i \,(G_{T,i} + L_i)\, \ln\!\left(1 + \frac{(G_{T,i} + L_i)^2 \,(w_i^* - w_{1,i})^2}{L_i^2}\right)\right]}}{\tau\, T},$$
where the expectation is with respect to the noise in the subgradients and the choice of $I$. Moreover, if $F$ is convex, the same guarantee with $\tau = 1$ also holds for $\bar{w}_T$.
The proof, in the Appendix, shows through induction that betting a fraction of money equal to $\beta_{t,i}$ in line 9 on the outcomes $g_{t,i}$, with an initial money of $L_i$, guarantees that the wealth after $T$ rounds is at least $L_i \exp\!\left(\frac{\theta_{T,i}^2}{2 L_i (G_{T,i} + L_i)} - \frac{1}{2} \ln \frac{G_{T,i}}{L_i}\right)$. Then, as sketched in Section 4, it is enough to calculate the Fenchel conjugate of the wealth and use the standard construction for the per-coordinate updates [Streeter and McMahan, 2010]. We note in passing that the proof technique is also novel, because the one introduced in Orabona and Pal [2016] does not allow data-dependent bounds.
[Figure 1 plot area: left and center panels plot y vs. x; right panel "Effective Learning Rate of COCOB" plots Effective Learning Rate (0-6) vs. Iterations (0-200).]
Figure 1: Behaviour of COCOB (left) and gradient descent with various learning rates and same number of steps (center) in minimizing the function $y = |x - 10|$. (right) The effective learning rates of COCOB. Figures best viewed in colors.
When $|g_{t,i}| = 1$, we have $\beta_{t,i} \approx \frac{\sum_{i=1}^{t-1} g_i}{t}$, which recovers the betting strategy in Orabona and Pal [2016]. In other words, we substitute the time variable with the data-dependent quantity $G_{t,i}$. In fact, our bound depends on the terms $G_{T,i}$, while the similar one in Orabona and Pal [2016] simply depends on $L_i T$. Hence, as in AdaGrad [Duchi et al., 2011], COCOB's bound is tighter because it takes advantage of sparse gradients.
COCOB converges at a rate of $O\!\left(\frac{\|w^*\|_1}{\sqrt{T}}\right)$ without any learning rate to tune. This has to be compared to the bound of AdaGrad, which is² $O\!\left(\frac{1}{\sqrt{T}} \sum_{i=1}^{d} \left(\frac{(w_i^*)^2}{\eta_i} + \eta_i\right)\right)$, where the $\eta_i$ are the initial learning rates for each coordinate. Usually all the $\eta_i$ are set to the same value, but from the bound we see that the optimal setting would require a different value for each of them. This effectively means that the optimal $\eta_i$ for AdaGrad are problem-dependent and typically unknown. Using the optimal $\eta_i$ would give us a convergence rate of $O\!\left(\frac{\|w^*\|_1}{\sqrt{T}}\right)$, that is, exactly equal to our bound up to polylogarithmic terms. Indeed, the logarithmic term in the square root of our bound is the price to pay to be adaptive to any $w^*$ without tuning hyperparameters. This logarithmic term is unavoidable for any algorithm that wants to be adaptive to $w^*$, hence our bound is optimal [Streeter and McMahan, 2012, Orabona, 2013].
To gain a better understanding of the differences between COCOB and other subgradient descent algorithms, it is helpful to compare their behaviour on the simple one-dimensional function $F(x) = |x - 10|$ already used in Section 4. In Figure 1 (left), COCOB starts from 0 and over time increases the iterate $w_t$ exponentially, until it meets a gradient of opposing sign. From the gambling
perspective this is obvious: The wealth will increase exponentially because there is a sequence of
identical outcomes, that in turn gives an increasing wealth and a sequence of increasing bets.
On the other hand, in Figure 1 (center), gradient descent shows a different behaviour depending on
its learning rate. If the learning rate is constant and too small (black line) it will take a huge number
of steps to reach the vicinity of the minimum. If the learning rate is constant and too large (red line),
it will keep oscillating around the minimum, unless some form of averaging is used [Zhang, 2004]. If the learning rate decreases as $\frac{\eta}{\sqrt{t}}$, as in AdaGrad [Duchi et al., 2011], it will slow down over time, but depending on the choice of the initial learning rate $\eta$ it might take an arbitrarily large number of steps to reach the minimum.
Also, notice that in this case the time to reach the vicinity of the minimum for gradient descent is
not influenced in any way by momentum terms or learning rates that adapt to the norm of the past
gradients, because the gradients are all the same. The same holds for second order methods: the function in the figure lacks any curvature, so these methods could not be used. Even approaches based on the
reduction of the variance in the gradients, e.g. [Johnson and Zhang, 2013], do not give any advantage
here because the subgradients are deterministic.
Figure 1 (right) shows the "effective learning rate" of COCOB, that is $\tilde{\eta}_t := w_t \sqrt{\sum_{i=1}^{t} g_i^2}$. This is the learning rate we should use in AdaGrad to obtain the same behaviour of COCOB.
² The AdaGrad variant used in deep learning does not have a convergence guarantee, because no projections are used. Hence, we report the oracle bound in the case that projections are used inside the hypercube with dimensions $|w_i^*|$.
Algorithm 2 COCOB-Backprop
1: Input: $\alpha > 0$ (default value = 100); $w_1 \in \mathbb{R}^d$ (initial parameters); $T$ (maximum number of iterations); $F$ (function to minimize)
2: Initialize: $L_{0,i} \leftarrow 0$, $G_{0,i} \leftarrow 0$, $\mathrm{Reward}_{0,i} \leftarrow 0$, $\theta_{0,i} \leftarrow 0$, $i = 1, \cdots,$ number of parameters
3: for $t = 1, 2, \ldots, T$ do
4:   Get a (negative) stochastic subgradient $g_t$ such that $\mathbb{E}[g_t] \in \partial[-F(w_t)]$
5:   for each $i$-th parameter in the network do
6:     Update the maximum observed scale: $L_{t,i} \leftarrow \max(L_{t-1,i}, |g_{t,i}|)$
7:     Update the sum of the absolute values of the subgradients: $G_{t,i} \leftarrow G_{t-1,i} + |g_{t,i}|$
8:     Update the reward: $\mathrm{Reward}_{t,i} \leftarrow \max(\mathrm{Reward}_{t-1,i} + (w_{t,i} - w_{1,i})\, g_{t,i},\, 0)$
9:     Update the sum of the gradients: $\theta_{t,i} \leftarrow \theta_{t-1,i} + g_{t,i}$
10:    Calculate the parameters: $w_{t+1,i} \leftarrow w_{1,i} + \frac{\theta_{t,i}}{L_{t,i}\, \max(G_{t,i} + L_{t,i},\ \alpha L_{t,i})} \left(L_{t,i} + \mathrm{Reward}_{t,i}\right)$
11:  end for
12: end for
13: Return $w_T$
We see a very interesting effect: the learning rate is neither constant nor monotonically increasing or decreasing. Rather, it is big when we are far from the optimum and small when close to it. However, we would like to stress that this behaviour has not been coded into the algorithm; rather, it is a side-effect of having the optimal convergence rate.
We will show in Section 7 that this theoretical gain is confirmed in the empirical results.
6 Backprop and Coin Betting
The algorithm described in the previous section is guaranteed to converge at the optimal convergence rate for non-smooth functions and does not require a learning rate. However, it still needs to know the maximum range of the gradients on each coordinate. Note that, due to the effect of vanishing gradients, each layer will have a different range of the gradients [Hochreiter, 1991]. Also, the weights of the network can grow over time, increasing the value of the gradients too. Hence, it would be impossible to know the range of each gradient beforehand and use any strategy based on betting.
Following the previous literature, e.g. [Kingma and Ba, 2015], we propose a variant of COCOB better suited to optimizing deep networks. We name it COCOB-Backprop; its pseudocode is in Algorithm 2. Although this version lacks the backing of a theoretical guarantee, it is still effective in practice, as we will show experimentally in Section 7.
There are a few differences between COCOB and COCOB-Backprop. First, we want to be adaptive to the maximum component-wise range of the gradients. Hence, in line 6 we constantly update the values $L_{t,i}$ for each variable. Next, since $L_{t-1,i}$ is no longer assured to be an upper bound on $g_{t,i}$, we do not have any guarantee that the wealth $\mathrm{Reward}_{t,i}$ is non-negative. Thus, we enforce the positivity of the reward in line 8 of Algorithm 2.
We also modify the fraction to bet in line 10 by removing the sigmoidal function, because $2\sigma(2x) - 1 \approx x$ for $x \in [-1, 1]$. This choice simplifies the code and always improves the results in our experiments. Moreover, we change the denominator of the fraction to bet such that it is at least $\alpha L_{t,i}$. This has the effect of restricting the value of the parameters in the first iterations of the algorithm. To better understand this change, consider that, for example, in AdaGrad and Adam with learning rate $\eta$ the first update is $w_{2,i} = w_{1,i} - \eta\,\mathrm{SGN}(g_{1,i})$. Hence, $\eta$ should have a value smaller than $w_{1,i}$ in order to not "forget" the initial point too fast. In fact, the initialization is critical to obtain good results, and moving too far away from it destroys the generalization ability of deep networks. Here, the first update becomes $w_{2,i} = w_{1,i} - \frac{1}{\alpha}\,\mathrm{SGN}(g_{1,i})$, so $\frac{1}{\alpha}$ should also be small compared to $w_{1,i}$.
Finally, as in previous algorithms, we do not return the average or a random iterate, but just the last one (line 13 in Algorithm 2).
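To make these differences concrete, here is a minimal NumPy sketch of the COCOB-Backprop update for a single parameter array; this is our own transcription of Algorithm 2 for illustration, not the authors' TensorFlow implementation:

```python
import numpy as np

class CocobBackprop:
    """Sketch of Algorithm 2 for one parameter array (default alpha = 100)."""
    def __init__(self, w, alpha=100.0):
        self.w1, self.alpha = w.copy(), alpha   # initial point is remembered
        self.L = np.zeros_like(w)               # max observed |g| (line 6)
        self.G = np.zeros_like(w)               # sum of |g| (line 7)
        self.reward = np.zeros_like(w)          # clipped wealth (line 8)
        self.theta = np.zeros_like(w)           # sum of g (line 9)

    def step(self, w, grad_F):
        g = -grad_F                             # bet on negative gradients
        self.L = np.maximum(self.L, np.abs(g))
        self.G += np.abs(g)
        self.reward = np.maximum(self.reward + (w - self.w1) * g, 0.0)
        self.theta += g
        denom = self.L * np.maximum(self.G + self.L, self.alpha * self.L)
        denom = np.where(denom > 0.0, denom, 1.0)   # guard all-zero gradients
        return self.w1 + self.theta / denom * (self.L + self.reward)
```

Note that, as in the pseudocode, no learning rate appears anywhere in the update.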
Figure 2: Training cost (cross-entropy) (left) and test error rate (0/1 loss) (right) vs. the number of epochs, with two different architectures on MNIST, as indicated in the figure titles. The y-axis is logarithmic in the left plots. Figures best viewed in colors.
7 Empirical Results and Future Work
We run experiments on various datasets and architectures, comparing COCOB with some popular stochastic gradient learning algorithms: AdaGrad [Duchi et al., 2011], RMSProp [Tieleman and Hinton, 2012], Adadelta [Zeiler, 2012], and Adam [Kingma and Ba, 2015]. For all the algorithms but COCOB, we select the learning rate that gives the best training cost a posteriori, using a very fine grid of values³. We implemented⁴ COCOB (following Algorithm 2) in TensorFlow [Abadi et al., 2015] and we used the implementations of the other algorithms provided by this deep learning framework. The best value of the learning rate for each algorithm and experiment is reported in the legend.
We report both the training cost and the test error but, as in previous work, e.g., [Kingma and Ba, 2015], we focus our empirical evaluation on the former. Indeed, given a large enough neural network it is always possible to overfit the training set, obtaining a very low performance on the test set. Hence, test errors do not depend only on the optimization algorithm.
Digits Recognition. As a first test, we tackle handwritten digit recognition using the MNIST dataset [LeCun et al., 1998a]. It contains 28×28 grayscale images, with 60k training samples and 10k test samples. We consider two different architectures, a fully connected 2-layer network and a Convolutional Neural Network (CNN). In both cases we study different optimizers on the standard cross-entropy objective function to classify 10 digits. For the first network we reproduce the structure described in the multi-layer experiment of [Kingma and Ba, 2015]: it has two fully connected hidden layers with 1000 hidden units each and ReLU activations, with a mini-batch size of 100. The weights are initialized with a centered truncated normal distribution with standard deviation 0.1; the same small value 0.1 is also used as initialization for the bias. The CNN architecture follows the TensorFlow tutorial⁵: two alternating stages of 5×5 convolutional filters and 2×2 max pooling are followed by a fully connected layer of 1024 rectified linear units (ReLU). To reduce overfitting, 50% dropout noise is used during training.
³ [0.00001, 0.000025, 0.00005, 0.000075, 0.0001, 0.00025, 0.0005, 0.00075, 0.001, 0.0025, 0.005, 0.0075, 0.01, 0.02, 0.05, 0.075, 0.1]
⁴ https://github.com/bremen79/cocob
⁵ https://www.tensorflow.org/get_started/mnist/pros
Figure 3: Training cost (cross-entropy) (left) and test error rate (0/1 loss) (right) vs. the number of epochs on CIFAR-10. The y-axis is logarithmic in the left plots. Figures best viewed in colors.
[Figure 4 plot area: panels "Word Prediction on PTB - Training Cost" and "Word Prediction on PTB - Test Cost"; y-axis: Perplexity, x-axis: Epochs; legend: AdaGrad 0.25, RMSprop 0.001, Adadelta 2.5, Adam 0.00075, COCOB.]
Figure 4: Training cost (left) and test cost (right), measured as average per-word perplexity, vs. the number of epochs on the PTB word-level language modeling task. Figures best viewed in colors.
Training cost and test error rate as functions of the number of training epochs are reported in Figure 2. With both architectures, the training cost of COCOB decreases at the same rate as the best tuned competitor algorithms. The training performance of COCOB is also reflected in its associated test error, which appears better than or on par with the other algorithms.
Object Classification. We use the popular CIFAR-10 dataset [Krizhevsky, 2009] to classify 32×32 RGB images across 10 object categories. The dataset has 60k images in total, split into a training/test set of 50k/10k samples. For this task we used the network defined in the TensorFlow CNN tutorial⁶. It starts with two convolutional layers with 64 kernels of dimension 5×5×3, each followed by a 3×3×3 max pooling with stride of 2 and by local response normalization as in Krizhevsky et al. [2012]. Two more fully connected layers, respectively of 384 and 192 rectified linear units, complete the architecture, which ends with a standard softmax cross-entropy classifier. We use a batch size of 128 and the input images are simply pre-processed by whitening. Differently from the TensorFlow tutorial, we do not apply random image distortion for data augmentation.
The obtained results are shown in Figure 3. Here, with respect to the training cost, our learning-rate-free COCOB performs on par with the best competitors. For all the algorithms, there is a good correlation between the test performance and the training cost. COCOB and its best competitor AdaDelta show similar classification results that differ on average by about 0.008 in error rate.
Word-level Prediction with RNN. Here we train a Recurrent Neural Network (RNN) on a language modeling task. Specifically, we conduct word-level prediction experiments on the Penn Tree Bank (PTB) dataset [Marcus et al., 1993], using the 929k training words and its 73k validation words. We adopted the medium LSTM [Hochreiter and Schmidhuber, 1997] network architecture described in Zaremba et al. [2014]: it has 2 layers with 650 units per layer and parameters initialized uniformly in [−0.05, 0.05]; a dropout of 50% is applied on the non-recurrent connections, and the norm of the gradients (normalized by mini-batch size = 20) is clipped at 5.
⁶ https://www.tensorflow.org/tutorials/deep_cnn
We show the obtained results in terms of average per-word perplexity in Figure 4. In this task COCOB performs as well as AdaGrad and Adam with respect to the training cost, and much better than the other algorithms. In terms of test performance, COCOB, Adam, and AdaGrad all show an overfitting behaviour, indicated by the perplexity slowly growing after having reached its minimum. AdaGrad is the least affected by this issue and presents the best results, followed by COCOB, which outperforms all the other methods. We stress again that the test performance does not depend only on the optimization algorithm used in training, and that early stopping may mitigate the overfitting effect.
Summary of the Empirical Evaluation and Future Work. Overall, COCOB has a training performance that is on par with or better than state-of-the-art algorithms with perfectly tuned learning rates. The test error appears to depend on other factors too, with equal training errors corresponding to different test errors.
We would also like to stress that in these experiments, contrary to some of the previously reported empirical results on similar datasets and networks, the difference between the competitor algorithms is minimal or non-existent when they are tuned on a very fine grid of learning rate values. Indeed, the very similar performance of these methods seems to indicate that all the algorithms are inherently doing the same thing, despite their different internal structures and motivations. Future, more detailed empirical results will focus on unveiling the common structure of these algorithms that gives rise to this behavior.
In the future, we also plan to extend the theory of COCOB beyond $\tau$-weakly-quasi-convex functions, characterizing the non-convexity present in deep networks. Also, it would be interesting to evaluate a possible integration of the betting framework with second-order methods.
Acknowledgments
The authors thank the Stony Brook Research Computing and Cyberinfrastructure, and the Institute
for Advanced Computational Science at Stony Brook University for access to the high-performance
SeaWulf computing system, which was made possible by a $1.4M National Science Foundation grant
(#1531492). The authors also thank Akshay Verma for the help with the TensorFlow implementation
and Matej Kristan for reporting a bug in the pseudocode in the previous version of the paper. T.T. was
supported by the ERC grant 637076 - RoboExNovo. F.O. is partly supported by a Google Research
Award.
References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
S.-I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In G. Montavon, G. B. Orr, and K.-R. Müller, editors, Neural Networks: Tricks of the Trade: Second Edition, pages 437–478. Springer, Berlin, Heidelberg, 2012.
A. Cutkosky and K. Boahen. Online learning without prior information. In Conference on Learning Theory (COLT), pages 643–677, 2017.
A. Cutkosky and K. A. Boahen. Online convex optimization with unconstrained domains and losses. In Advances in Neural Information Processing Systems (NIPS), pages 748–756, 2016.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
M. Hardt, T. Ma, and B. Recht. Gradient descent learns linear dynamical systems. arXiv preprint arXiv:1609.05191, 2016.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems (NIPS), pages 315–323, 2013.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998a. URL http://yann.lecun.com/exdb/mnist/.
Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. Springer, 1998b.
M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
H. B. McMahan and F. Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Conference on Learning Theory (COLT), pages 1020–1039, 2014.
Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence $O(1/k^2)$. Soviet Mathematics Doklady, 27(2):372–376, 1983.
Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
F. Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems (NIPS), pages 1806–1814, 2013.
F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems (NIPS), pages 1116–1124, 2014.
F. Orabona and D. Pal. Scale-free algorithms for online linear optimization. In International Conference on Algorithmic Learning Theory (ALT), pages 287–301. Springer, 2015.
F. Orabona and D. Pal. Coin betting and parameter-free online learning. In Advances in Neural Information Processing Systems (NIPS), pages 577–585, 2016.
F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2015.
S. Ross, P. Mineiro, and J. Langford. Normalized online learning. In Proc. of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In International Conference on Machine Learning (ICML), pages 343–351, 2013.
M. Streeter and H. B. McMahan. Less regret via online conditioning. arXiv preprint arXiv:1002.4862, 2010.
M. Streeter and H. B. McMahan. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems (NIPS), pages 2402–2410, 2012.
I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning (ICML), pages 1139–1147, 2013.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
S. Wright and J. Nocedal. Numerical Optimization. Springer, 1999.
W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
M. D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In International Conference on Machine Learning (ICML), pages 919–926, 2004.
Pixels to Graphs by Associative Embedding
Alejandro Newell
Jia Deng
Computer Science and Engineering
University of Michigan, Ann Arbor
{alnewell, jiadeng}@umich.edu
Abstract
Graphs are a useful abstraction of image content. Not only can graphs represent
details about individual objects in a scene but they can capture the interactions
between pairs of objects. We present a method for training a convolutional neural
network such that it takes in an input image and produces a full graph definition.
This is done end-to-end in a single stage with the use of associative embeddings.
The network learns to simultaneously identify all of the elements that make up a
graph and piece them together. We benchmark on the Visual Genome dataset, and
demonstrate state-of-the-art performance on the challenging task of scene graph
generation.
1 Introduction
Extracting semantics from images is one of the main goals of computer vision. Recent years have
seen rapid progress in the classification and localization of objects [7, 24, 10]. But a bag of labeled
and localized objects is an impoverished representation of image semantics: it tells us what and where
the objects are ("person" and "car"), but does not tell us about their relations and interactions ("person next to car"). A necessary step is thus to not only detect objects but to identify the relations between
them. An explicit representation of these semantics is referred to as a scene graph [12] where we
represent objects grounded in the scene as vertices and the relationships between them as edges.
End-to-end training of convolutional networks has proven to be a highly effective strategy for image
understanding tasks. It is therefore natural to ask whether the same strategy would be viable for
predicting graphs from pixels. Existing approaches, however, tend to break the problem down into
more manageable steps. For example, one might run an object detection system to propose all of the
objects in the scene, then isolate individual pairs of objects to identify the relationships between them
[18]. This breakdown often restricts the visual features used in later steps and limits reasoning over
the full graph and over the full contents of the image.
We propose a novel approach to this problem, where we train a network to define a complete graph
from a raw input image. The proposed supervision allows a network to better account for the full
image context while making predictions, meaning that the network reasons jointly over the entire
scene graph rather than focusing on pairs of objects in isolation. Furthermore, there is no explicit
reliance on external systems such as Region Proposal Networks (RPN) [24] that provide an initial
pool of object detections.
To do this, we treat all graph elements, both vertices and edges, as visual entities to be detected as in a standard object detection pipeline. Specifically, a vertex is an instance of an object ("person"), and an edge is an instance of an object-object relation ("person next to car"). Just as visual patterns
in an image allow us to distinguish between objects, there are properties of the image that allow us to
see relationships. We train the network to pick up on these properties and point out where objects and
relationships are likely to exist in the image space.
Figure 1: Scene graphs are defined by the objects in an image (vertices) and their interactions (edges).
The ability to express information about the connections between objects makes scene graphs a useful
representation for many computer vision tasks including captioning and visual question answering.
What distinguishes this work from established detection approaches [24] is the need to represent
connections between detections. Traditionally, a network takes an image, identifies the items of
interest, and outputs a pile of independent objects. A given detection does not tell us anything about
the others. But now, if the network produces a pool of objects ("car", "person", "dog", "tree", etc.), and also identifies a relationship such as "in front of", we need to define which of the detected objects is in front of which. Since we do not know which objects will be found in a given image ahead of
time, the network needs to somehow refer to its own outputs.
We draw inspiration from associative embeddings [20] to solve this problem. Originally proposed for
detection and grouping in the context of multiperson pose estimation, associative embeddings provide
the necessary flexibility in the network's output space. For pose estimation, the idea is to predict
an embedding vector for each detected body joint such that detections with similar embeddings can
be grouped to form an individual person. But in its original formulation, the embeddings are too restrictive: the network can only define clusters of nodes, and for a scene graph, we need to express
arbitrary edges between pairs of nodes.
To address this, associative embeddings must be used in a substantially different manner. That is,
rather than having nodes output a shared embedding to refer to clusters and groups, we instead have
each node define its own unique embedding. Given a set of detected objects, the network outputs a
different embedding for each object. Now, each edge can refer to the source and destination nodes by
correctly producing their embeddings. Once the network is trained it is straightforward to match the
embeddings from detected edges to each vertex and construct a final graph.
There is one further issue that we address in this work: how to deal with detections grounded at the
same location in the image. Frequently in graph prediction, multiple vertices or edges may appear in
the same place. Supervision of this is difficult as training a network traditionally requires telling it
exactly what appears and where. With an unordered set of overlapping detections there may not be a
direct mapping to explicitly lay this out. Consider a set of object relations grounded at the same pixel
location. Assume the network has some fixed output space consisting of discrete "slots" in which
detections can appear. It is unclear how to define a mapping so that the network has a consistent rule
for organizing its relation predictions into these slots. We address this problem by not enforcing any
explicit mapping at all, and instead provide supervision such that it does not matter how the network
chooses to fill its output, a correct loss can still be applied.
Our contributions are a novel use of associative embeddings for connecting the vertices and edges of
a graph, and a technique for supervising an unordered set of network outputs. Together these form
the building blocks of our system for direct graph prediction from pixels. We apply our method to the
task of generating a semantic graph of objects and relations and test on the Visual Genome dataset
[14]. We achieve state-of-the-art results, improving performance over prior work by nearly a factor of
three on the most difficult task setting.
2 Related Work
Relationship detection: There are many ways to frame the task of identifying objects and the
relationships between them. This includes localization from referential expressions [11], detection
of human-object interactions [3], or the more general tasks of visual relationship detection (VRD)
[18] and scene graph generation [12]. In all of these settings, the aim is to correctly determine the
relationships between pairs of objects and ground this in the image with accurate object bounding
boxes.
Visual relationship detection has drawn much recent attention [18, 28, 27, 2, 17, 19, 22, 23]. The
open-ended and challenging nature of the task lends itself to a variety of diverse approaches and
solutions. For example: incorporating vision and language when reasoning over a pair of objects
[18]; using message-passing RNNs to process a set of proposed object boxes [26]; predicting over
triplets of bounding boxes that correspond to proposals for a subject, phrase, and object [15];
using reinforcement learning to sequentially evaluate on pairs of object proposals and determine their
relationships [16]; comparing the visual features and relative spatial positions of pairs of boxes [4];
learning to project proposed objects into a vector space such that the difference between two object
vectors is informative of the relationship between them [27].
Most of these approaches rely on generated bounding boxes from a Region Proposal Network (RPN)
[24]. Our method does not require proposed boxes and can produce detections directly from the image.
However, proposals can be incorporated as additional input to improve performance. Furthermore,
many methods process pairs of objects in isolation whereas we train a network to process the whole
image and produce all object and relationship detections at once.
Associative Embedding: Vector embeddings are used in a variety of contexts. For example, to
measure the similarity between pairs of images [6, 25], or to map visual and text features to a shared
vector space [5, 8, 13]. Recent work uses vector embeddings to group together body joints for
multiperson pose estimation [20]. These are referred to as associative embeddings since supervision
does not require the network to output a particular vector value, and instead uses the distances
between pairs of embeddings to calculate a loss. What is important is not the exact value of the vector
but how it relates to the other embeddings produced by the network.
More specifically, in [20] a network is trained to detect body joints of the various people in an image.
In addition, it must produce a vector embedding for each of its detections. The embedding is used
to identify which person a particular joint belongs to. This is done by ensuring that all joints that
belong to a single individual produce the same output embedding, and that the embeddings across
individuals are sufficiently different to separate detections out into discrete groups. In a certain sense,
this approach does define a graph, but the graph is restricted in that it can only represent clusters of
nodes. For the purposes of our work, we take a different perspective on the associative embedding
loss in order to express any arbitrary graph as defined by a set of vertices and directed edges. There
are other ways that embeddings could be applied to solve this problem, but our approach depends on
our specific formulation where we treat edges as elements of the image to be detected which is not
obvious given the prior use of associative embeddings for pose.
3 Pixels → Graph
Our goal is to construct a graph from a set of pixels. In particular, we want to construct a graph grounded in the space of these pixels, meaning that in addition to identifying the vertices of the graph, we want to know their precise locations. A vertex in this case can refer to any object of interest in the scene, including people, cars, clothing, and buildings. The relationships between these objects are then
captured by the edges of the graph. These relationships may include verbs (eating, riding), spatial
relations (on the left of, behind), and comparisons (smaller than, same color as).
More formally, we consider a directed graph $G = (V, E)$. A given vertex $v_i \in V$ is grounded at a location $(x_i, y_i)$ and defined by its class and bounding box. Each edge $e \in E$ takes the form $e_i = (v_s, v_t, r_i)$, defining a relationship of type $r_i$ from $v_s$ to $v_t$. We train a network to explicitly
fully over the image and all possible components of the graph when making its predictions.
While production of the graph occurs all at once, it helps to think of the process in two main steps:
detecting individual elements of the graph, and connecting these elements together. For the first step,
the network indicates where vertices and edges are likely to exist and predicts the properties of these
detections. For the second, we determine which two vertices are connected by a detected edge. We
describe these two steps in detail in the following subsections.
Figure 2: Full pipeline for object and relationship detection. A network is trained to produce two
heatmaps that activate at the predicted locations of objects and relationships. Feature vectors are
extracted from the pixel locations of top activations and fed through fully connected networks to
predict object and relationship properties. Embeddings produced at this step serve as IDs allowing
detections to refer to each other.
3.1 Detecting graph elements
First, the network must find all of the vertices and edges that make up a graph. Each graph element
is grounded at a pixel location which the network must identify. In a scene graph where vertices
correspond to object detections, the center of the object bounding box will serve as the grounding
location. We ground edges at the midpoint of the source and target vertices: $\left(\left\lfloor\frac{x_s + x_t}{2}\right\rfloor, \left\lfloor\frac{y_s + y_t}{2}\right\rfloor\right)$.
With this grounding in mind, we can detect individual elements by using a network that produces
per-pixel features at a high output resolution. The feature vector at a pixel determines if an edge or
vertex is present at that location, and if so is used to predict the properties of that element.
A convolutional neural network is used to process the image and produce a feature tensor of size $h \times w \times f$. All information necessary to define a vertex or edge is thus encoded at a particular pixel in a feature vector of length $f$. Note that even at a high output resolution, multiple graph elements may
be grounded at the same location. The following discussion assumes up to one vertex and edge can
exist at a given pixel, and we elaborate on how we accommodate multiple detections in Section 3.3.
We use a stacked hourglass network [21] to process an image and produce the output feature tensor.
While our method has no strict dependence on network architecture, there are some properties that are
important for this task. The hourglass design combines global and local information to reason over
the full image and produce high quality per-pixel predictions. This is originally done for human pose
prediction which requires global reasoning over the structure of the body, but also precise localization
of individual joints. Similar logic applies to scene graphs where the context of the whole scene must
be taken into account, but we wish to preserve the local information of individual elements.
An important design choice here is the output resolution of the network. It does not have to match the
full input resolution, but there are a few details worth considering. First, it is possible for elements to
be grounded at the exact same pixel. The lower the output resolution, the higher the probability of
overlapping detections. Our approach allows this, but the fewer overlapping detections, the better.
All information necessary to define these elements must be encoded into a single feature vector of
length f which gets more difficult to do as more elements occupy a given location. Another detail is
that increasing the output resolution aids in performing better localization.
To predict the presence of graph elements we take the final feature tensor and apply a 1×1 convolution
and sigmoid activation to produce two heatmaps (one for vertices and another for edges). Each
heatmap indicates the likelihood that a vertex or edge exists at a given pixel. Supervision is a binary
cross-entropy loss on the heatmap activations, and we threshold on the result to produce a candidate
set of detections.
Next, for each of these detections we must predict their properties such as their class label. We extract
the feature vector from the corresponding location of a detection, and use the vector as input to a
set of fully connected networks. A separate network is used for each property we wish to predict,
and each consists of a single hidden layer with f nodes. This is illustrated above in Figure 2. During
training we use the ground truth locations of vertices and edges to extract features. A softmax loss is
used to supervise labels like object class and relationship predicate. And to predict bounding box
information we use anchor boxes and regress offsets based on the approach in Faster-RCNN [24].
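As a concrete illustration of these heads, here is a minimal PyTorch-style sketch (our own, not the authors' TensorFlow code; the width f = 256 and the 150/50 class counts are taken from later sections, and all module names are hypothetical):

```python
import torch
import torch.nn as nn

class GraphElementHeads(nn.Module):
    """Per-pixel heads on top of the hourglass feature tensor (illustrative sketch)."""

    def __init__(self, f=256, num_classes=150, num_predicates=50):
        super().__init__()
        # 1x1 convolution + sigmoid -> two heatmaps (vertices, edges)
        self.heatmaps = nn.Sequential(nn.Conv2d(f, 2, kernel_size=1), nn.Sigmoid())
        # one single-hidden-layer network per property, applied to extracted vectors
        self.vertex_class = nn.Sequential(nn.Linear(f, f), nn.ReLU(), nn.Linear(f, num_classes))
        self.edge_class = nn.Sequential(nn.Linear(f, f), nn.ReLU(), nn.Linear(f, num_predicates))

    def forward(self, feats, locations):
        """feats: (B, f, h, w); locations: list of (batch, y, x) detection positions."""
        maps = self.heatmaps(feats)  # (B, 2, h, w) likelihood of a vertex/edge per pixel
        vecs = torch.stack([feats[b, :, y, x] for b, y, x in locations])  # (N, f)
        return maps, self.vertex_class(vecs), self.edge_class(vecs)
```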
In summary, the detection pipeline works as follows: We pass the image through a network to produce
a set of per-pixel features. These features are first used to produce heatmaps identifying vertex and
edge locations. Individual feature vectors are extracted from the top heatmap locations to predict the
appropriate vertex and edge properties. The final result is a pool of vertex and edge detections that
together will compose the graph.
3.2 Connecting elements with associative embeddings
Next, the various pieces of the graph need to be put together. This is made possible by training the
network to produce additional outputs in the same step as the class and bounding box prediction.
For every vertex, the network produces a unique identifier in the form of a vector embedding, and
for every edge, it must produce the corresponding embeddings to refer to its source and destination
vertices. The network must learn to ensure that embeddings are different across different vertices,
and that all embeddings that refer to a single vertex are the same.
These embeddings are critical for explicitly laying out the definition of a graph. For instance, while
it is helpful that edge detections are grounded at the midpoint of two vertices, this ultimately does
not address a couple of critical details for correctly constructing the graph. The midpoint does
not indicate which vertex serves as the source and which serves as the destination, nor does it
disambiguate between pairs of vertices that happen to share the same midpoint.
To train the network to produce a coherent set of embeddings we build off of the loss penalty used in
[20]. During training, we have a ground truth set of annotations defining the unique objects in the
scene and the edges between these objects. This allows us to enforce two penalties: that an edge points
to a vertex by matching its output embedding as closely as possible, and that the embedding vectors
produced for each vertex are sufficiently different. We think of the first as "pulling together" all references to a single vertex, and the second as "pushing apart" the references to different individual vertices.
We consider an embedding $h_i \in \mathbb{R}^d$ produced for a vertex $v_i \in V$. All edges that connect to this vertex produce a set of embeddings $h'_{ik}$, $k = 1, ..., K_i$, where $K_i$ is the total number of references to that vertex. Given an image with $n$ objects, the loss to "pull together" these embeddings is:

$$L_{pull} = \frac{1}{\sum_{i=1}^{n} K_i} \sum_{i=1}^{n} \sum_{k=1}^{K_i} (h_i - h'_{ik})^2$$
To "push apart" embeddings across different vertices we first used the penalty described in [20], but experienced difficulty with convergence. We tested alternatives and the most reliable loss was a margin-based penalty similar to [9]:

$$L_{push} = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \max(0, m - \|h_i - h_j\|)$$
Intuitively, Lpush is at its highest the closer hi and hj are to each other. The penalty drops off sharply
as the distance between hi and hj grows, eventually hitting zero once the distance is greater than a
given margin m. On the flip side, for some edge connected to a vertex vi , the loss Lpull will quickly
grow the further its reference embedding h'_i is from h_i.
The two penalties are weighted equally leaving a final associative embedding loss of Lpull + Lpush .
In this work, we use m = 8 and d = 8. Convergence of the network improves greatly after increasing
the dimension d of tags up from 1 as used in [20].
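A compact PyTorch sketch of the two penalties (illustrative code rather than the authors' implementation; the tensor layout and the normalization of the pull term by the total number of references follow the equations above):

```python
import torch

def associative_embedding_loss(vertex_emb, edge_refs, ref_to_vertex, margin=8.0):
    """Pull/push penalties for one image (illustrative sketch).

    vertex_emb:    (n, d) one embedding h_i per ground-truth vertex
    edge_refs:     (R, d) all reference embeddings h'_ik produced by edges
    ref_to_vertex: (R,) index of the vertex each reference points to
    """
    # L_pull: every reference should match its vertex embedding.
    pull = ((edge_refs - vertex_emb[ref_to_vertex]) ** 2).sum() / max(edge_refs.shape[0], 1)

    # L_push: hinge on pairwise distances between distinct vertex embeddings.
    n = vertex_emb.shape[0]
    dists = torch.cdist(vertex_emb, vertex_emb)  # (n, n) pairwise L2 distances
    hinge = torch.clamp(margin - dists, min=0.0)
    iu = torch.triu_indices(n, n, offset=1)      # all pairs i < j
    push = hinge[iu[0], iu[1]].sum()

    return pull + push  # the two penalties are weighted equally
```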
Once the network is trained with this loss, full construction of the graph can be performed with a
trivial postprocessing step. The network produces a pool of vertex and edge detections. For every
edge, we look at the source and destination embeddings and match them to the closest embedding
amongst the detected vertices. Multiple edges may have the same source and target vertices, vs and
vt , and it is also possible for vs to equal vt .
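That postprocessing step amounts to a nearest-neighbor lookup in embedding space; a hedged sketch with assumed shapes:

```python
import torch

def decode_edges(vertex_emb, edge_src_emb, edge_dst_emb):
    """Match each edge's reference embeddings to the closest detected vertex (sketch).

    vertex_emb:   (V, d) embeddings of detected vertices
    edge_src_emb: (E, d) source embeddings output by detected edges
    edge_dst_emb: (E, d) destination embeddings output by detected edges
    """
    src = torch.cdist(edge_src_emb, vertex_emb).argmin(dim=1)  # (E,) source vertex ids
    dst = torch.cdist(edge_dst_emb, vertex_emb).argmin(dim=1)  # (E,) destination ids
    return src, dst  # src[e] may equal dst[e], and pairs may repeat
```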
3.3 Support for overlapping detections
In scene graphs, there are going to be many cases where multiple vertices or multiple edges will be
grounded at the same pixel location. For example, it is common to see two distinct relationships
between a single pair of objects: person wearing shirt and shirt on person. The detection pipeline
must therefore be extended to support multiple detections at the same pixel.
One way of dealing with this is to define an extra axis that allows for discrete separation of detections
at a given x, y location. For example, one could split up objects along a third spatial dimension
assuming the z-axis were annotated, or perhaps separate them by bounding box anchors. In either of
these cases there is a visual cue guiding the network so that it can learn a consistent rule for assigning
new detections to a correct slot in the third dimension. Unfortunately this idea cannot be applied as
easily to relationship detections. It is unclear how to define a third axis such that there is a reliable
and consistent bin assignment for each relationship.
In our approach, we still separate detections out into several discrete bins, but address the issue of
assignment by not enforcing any specific assignment at all. This means that for a given detection we
strictly supervise the x, y location in which it is to appear, but allow it to show up in one of several
"slots". We have no way of knowing ahead of time in which slot it will be placed by the network, so
this means an extra step must be taken at training time to identify where we think the network has
placed its predictions and then enforce the loss at those slots.
We define s_o and s_r to be the number of slots available for objects and relationships respectively. We modify the network pipeline so that instead of producing predictions for a single object and relationship at a pixel, a feature vector is used to produce predictions for a set of s_o objects and s_r relationships. That is, given a feature vector f from a single pixel, the network will for example output s_o object class labels, s_o bounding box predictions, and s_o embeddings. This is done with
separate fully connected layers predicting the various object and relationship properties for each
available slot. No weights are shared amongst these layers. Furthermore, we add an additional output
to serve as a score indicating whether or not a detection exists at each slot.
During training, we have some number of ground truth objects, between 1 and s_o, grounded at a particular pixel. We do not know which of the s_o outputs of the network will correspond to which
objects, so we must perform a matching step. The network produces distributions across possible
object classes and bounding box sizes, so we try to best match the outputs to the ground truth
information we have available. We construct a reference vector by concatenating one-hot encodings
of the class and bounding box anchor for a given object. Then we compare these reference vectors to
the output distributions produced at each slot. The Hungarian method is used to perform a maximum
matching step such that ground truth annotations are assigned to the best possible slot, but no two
annotations are assigned to the same slot.
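A sketch of this assignment step using SciPy's Hungarian solver (the negative inner product as the cost is our illustrative choice; the text above only specifies comparing slot outputs to one-hot reference vectors):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_slots(slot_preds, gt_refs):
    """Assign ground-truth annotations to output slots (illustrative sketch).

    slot_preds: (s, c) per-slot predicted distributions (class and box anchor concatenated)
    gt_refs:    (g, c) one-hot reference vectors, with g <= s
    Returns an array of (gt_index, slot_index) pairs; no slot is used twice.
    """
    cost = -gt_refs @ slot_preds.T                  # higher agreement -> lower cost
    gt_idx, slot_idx = linear_sum_assignment(cost)  # Hungarian maximum matching
    return np.stack([gt_idx, slot_idx], axis=1)
```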
Matching for relationships is similar. The ground truth reference vector is constructed by concatenating a one-hot encoding of its class with the output embeddings h_s and h_t from the source and destination vertices, v_s and v_t. Once the best matching has been determined, we have a correspondence between the network predictions and the set of ground truth annotations and can now apply the various losses. We also supervise the score for each slot depending on whether or not it is matched up to a ground truth detection, thus teaching the network to indicate a "full" or "empty" slot.
This matching process is only used during training. At test time, we extract object and relationship
detections from the network by first thresholding on the heatmaps to find a set of candidate pixel
locations, and then thresholding on individual slot scores to see which slots have produced detections.
4 Implementation details
We train a stacked hourglass architecture [21] in TensorFlow [1]. The input to the network is a
512×512 image, with an output resolution of 64×64. To prepare an input image we resize it so that its largest dimension is of length 512, and center it by padding with zeros along the other dimension. During training, we augment this procedure with random translation and scaling, making sure to update the ground truth annotations to ignore objects and relationships that may be cropped out.
We make a slight modification to the original hourglass design: doubling the number of features to 512 at the two lowest resolutions of the hourglass. The output feature length f is 256. All losses (classification, bounding box regression, associative embedding) are weighted equally throughout the course of training. We set s_o = 3 and s_r = 6, which is sufficient to completely accommodate the detection annotations for all but a small fraction of cases.
Figure 3: Predictions on Visual Genome. In the top row, the network must produce all object and relationship detections directly from the image. The second row includes examples from an easier version of the task where object detections are provided. Relationships outlined in green correspond to predictions that correctly matched to a ground truth annotation.
Incorporating prior detections: In some problem settings, a prior set of object detections may be
made available either as ground truth annotations or as proposals from an independent system. It is
good to have some way of incorporating these into the network. We do this by formatting an object
detection as a two channel input where one channel consists of a one-hot activation at the center of
the object bounding box and the other provides a binary mask of the box. Multiple boxes can be
displayed on these two channels, with the first indicating the center of each box and the second, the
union of their masks.
If provided with a large set of detections, this representation becomes too crowded so we either
separate bounding boxes by object class, or if no class information is available, by bounding box
anchors. To reduce computational cost this additional input is incorporated after several layers
of convolution and pooling have been applied to the input image. For example, we set up this
representation at the output resolution, 64×64, then apply several consecutive 1×1 convolutions to
remap the detections to a feature tensor with f channels. Then, we add this result to the first feature
tensor produced by the hourglass network at the same resolution and number of channels.
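A sketch (ours, with assumed shapes and coordinate conventions) of the two-channel rendering and its fusion into the feature tensor:

```python
import torch
import torch.nn as nn

class PriorDetectionEncoder(nn.Module):
    """Render prior boxes into two channels and fuse them with the features (sketch)."""

    def __init__(self, f=256):
        super().__init__()
        # several consecutive 1x1 convolutions remap 2 channels to f channels
        self.remap = nn.Sequential(nn.Conv2d(2, f, 1), nn.ReLU(), nn.Conv2d(f, f, 1))

    def forward(self, feats, boxes_per_image):
        # feats: (B, f, H, W) at the output resolution; integer boxes in output coords
        B, _, H, W = feats.shape
        det = torch.zeros(B, 2, H, W, device=feats.device)
        for b, boxes in enumerate(boxes_per_image):
            for x0, y0, x1, y1 in boxes:
                det[b, 0, (y0 + y1) // 2, (x0 + x1) // 2] = 1.0  # box centers
                det[b, 1, y0:y1, x0:x1] = 1.0                    # union of box masks
        return feats + self.remap(det)
```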
Sparse supervision: It is important to note that it is almost impossible to exhaustively annotate
images for scene graphs. A large number of possible relationships can be described between pairs of
objects in a real-world scene. The network is likely to generate many reasonable predictions that are
not covered in the ground truth. We want to reduce the penalty associated with these detections and
encourage the network to produce as many detections as possible. There are a few properties of our
training pipeline that are conducive to this.
For example, we do not need to supervise the entire heatmap for object and relationship detections.
Instead, we apply a loss at the pixels we know correspond to positive detections, and then randomly
sample some fraction from the rest of the image to serve as negatives. This balances the proportion of
positive and negative samples, and reduces the chance of falsely penalizing unannotated detections.
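A sketch of this sampled heatmap loss (the 3:1 negative-to-positive sampling ratio is an assumption for illustration):

```python
import torch
import torch.nn.functional as F

def sampled_heatmap_loss(pred, target, neg_per_pos=3):
    """Binary cross-entropy on positives plus randomly sampled negatives (sketch).

    pred, target: (B, 2, h, w); pred is post-sigmoid, target is binary.
    """
    pos = target > 0.5
    keep = min(1.0, neg_per_pos * pos.sum().item() / max((~pos).sum().item(), 1))
    sampled_neg = (~pos) & (torch.rand_like(pred) < keep)  # random subset of negatives
    mask = pos | sampled_neg
    return F.binary_cross_entropy(pred[mask], target[mask])
```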
5 Experiments
Dataset: We evaluate the performance of our method on the Visual Genome dataset [14]. Visual
Genome consists of 108,077 images annotated with object detections and object-object relationships,
and it serves as a challenging benchmark for scene graph generation on real-world images.
| Model          | SGGen (no RPN)  | SGGen (w/ RPN)  | SGCls           | PredCls         |
|                | R@50    R@100   | R@50    R@100   | R@50    R@100   | R@50    R@100   |
| Lu et al. [18] | –       –       | 0.3     0.5     | 11.8    14.1    | 27.9    35.0    |
| Xu et al. [26] | –       –       | 3.4     4.2     | 21.7    24.4    | 44.8    53.0    |
| Our model      | 6.7     7.8     | 9.7     11.3    | 26.5    30.0    | 68.0    75.2    |

Table 1: Results on Visual Genome
| Predicate  | R@100 |   | Predicate   | R@100 |
| wearing    | 87.3  |   | to          | 5.5   |
| has        | 80.4  |   | and         | 5.4   |
| on         | 79.3  |   | playing     | 3.8   |
| wears      | 77.1  |   | made of     | 3.2   |
| of         | 76.1  |   | painted on  | 2.5   |
| riding     | 74.1  |   | between     | 2.3   |
| holding    | 66.9  |   | against     | 1.6   |
| in         | 61.6  |   | flying in   | 0.0   |
| sitting on | 58.4  |   | growing on  | 0.0   |
| carrying   | 56.1  |   | from        | 0.0   |

Figure 4: How detections are distributed across the six available slots for relationships.
Table 2: Performance per relationship predicate (top ten on left, bottom ten on right)
Some processing has to be done before using the dataset, as objects and relationships are annotated with natural language rather than discrete classes, and many redundant bounding box detections are provided
for individual objects. To make a direct comparison to prior work we use the preprocessed version of
the set made available by Xu et al. [26]. Their network is trained to predict the 150 most frequent
object classes and 50 most frequent relationship predicates in the dataset. We use the same categories,
as well as the same training and test split as defined by the authors.
Task: The scene graph task is defined as the production of a set of subject-predicate-object tuples. A
proposed tuple is composed of two objects defined by their class and bounding box and the relationship
between them. A tuple is correct if the object and relationship classes match those of a ground truth
annotation and the two objects have at least a 0.5 IoU overlap with the corresponding ground truth
objects. To avoid penalizing extra detections that may be correct but missing an annotation, the
standard evaluation metric used for scene graphs is Recall@k, which measures the fraction of ground truth tuples that appear in a set of k proposals. Following [26], we report performance on three problem
settings:
SGGen: Detect and classify all objects and determine the relationships between them.
SGCls: Ground truth object boxes are provided, classify them and determine their relationships.
PredCls: Boxes and classes are provided for all objects, predict their relationships.
SGGen corresponds to the full scene graph task while PredCls allows us to focus exclusively on
predicate classification. Example predictions on the SGGen and PredCls tasks are shown in Figure
3. It can be seen in Table 1 that on all three settings, we achieve a significant improvement in
performance over prior work. It is worth noting that prior approaches to this problem require a
set of object proposal boxes in order to produce their predictions. For the full scene graph task
(SGGen) these detections are provided by a Region Proposal Network (RPN) [24]. We evaluate
performance with and without the use of RPN boxes, and achieve promising results even without the use of proposal boxes, using nothing but the raw image as input. Furthermore, the network is trained
from scratch, and does not rely on pretraining on other datasets.
Discussion: There are a few interesting results that emerge from our trained model. The network
exhibits a number of biases in its predictions. For one, the vast majority of predicate predictions
correspond to a small fraction of the 50 predicate classes. Relationships like "on" and "wearing" tend
to completely dominate the network output, and this is in large part a function of the distribution of
ground truth annotations of Visual Genome. There are several orders of magnitude more examples for
"on" than most other predicate classes. This discrepancy becomes especially apparent when looking
at the performance per predicate class in Table 2. The poor results on the worst classes do not have
much effect on final performance since there are so few instances of relationships labeled with those
predicates.
We do some additional analysis to see how the network fills its "slots" for relationship detection. Recall that at a particular pixel the network produces a set of detections, expressed by filling out a fixed set of available slots. There is no explicit mapping telling the network in which slots it should put particular detections. From Figure 4, we see that the network learns to divide slots up such
that they correspond to subsets of predicates. For example, any detection for the predicates behind,
has, in, of, and on will exclusively fall into three of the six available slots. This pattern emerges for
most classes, with the exception of wearing/wears where detections are distributed uniformly across
all six slots.
6 Conclusion
The qualities of a graph that allow it to capture so much information about the semantic content of an
image come at the cost of additional complexity for any system that wishes to predict them. We show
how to supervise a network such that all of the reasoning about a graph can be abstracted away into a
single network. The use of associative embeddings and unordered output slots offers the network the flexibility necessary to make training on this task possible. Our results on Visual Genome clearly
demonstrate the effectiveness of our approach.
7 Acknowledgements
This publication is based upon work supported by the King Abdullah University of Science and
Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-2015-CRG42639.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, and Gal Chechik. Learning to generalize
to new compositions in image understanding. arXiv preprint arXiv:1608.07639, 2016.
[3] Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object
interactions. arXiv preprint arXiv:1702.05448, 2017.
[4] Bo Dai, Yuqi Zhang, and Dahua Lin. Detecting visual relationships with deep relational networks. arXiv
preprint arXiv:1704.03114, 2017.
[5] Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. Devise: A
deep visual-semantic embedding model. In Advances in neural information processing systems, pages
2121–2129, 2013.
[6] Andrea Frome, Yoram Singer, Fei Sha, and Jitendra Malik. Learning globally-consistent local distance
functions for shape-based image retrieval and classification. In 2007 IEEE 11th International Conference
on Computer Vision, pages 1–8. IEEE, 2007.
[7] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate
object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision
and pattern recognition, pages 580–587, 2014.
[8] Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improving
image-sentence embeddings using large weakly annotated photo collections. In European Conference on
Computer Vision, pages 529–545. Springer, 2014.
[9] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping.
In Computer vision and pattern recognition, 2006 IEEE computer society conference on, volume 2, pages
1735–1742. IEEE, 2006.
[10] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. arXiv preprint arXiv:1703.06870, 2017.
[11] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships
in referential expressions with compositional modular networks. arXiv preprint arXiv:1611.09978, 2016.
[12] Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li FeiFei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 3668–3678, 2015.
[13] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137,
2015.
[14] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen,
Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome:
Connecting language and vision using crowdsourced dense image annotations. 2016.
[15] Yikang Li, Wanli Ouyang, and Xiaogang Wang. Vip-cnn: A visual phrase reasoning convolutional neural
network for visual relationship detection. arXiv preprint arXiv:1702.07191, 2017.
[16] Xiaodan Liang, Lisa Lee, and Eric P Xing. Deep variation-structured reinforcement learning for visual
relationship and attribute detection. arXiv preprint arXiv:1703.03054, 2017.
[17] Wentong Liao, Michael Ying Yang, Hanno Ackermann, and Bodo Rosenhahn. On support relations and
semantic scene graphs. arXiv preprint arXiv:1609.05834, 2016.
[18] Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language
priors. In European Conference on Computer Vision, pages 852–869. Springer, 2016.
[19] Cewu Lu, Hao Su, Yongyi Lu, Li Yi, Chikeung Tang, and Leonidas Guibas. Beyond holistic object
recognition: Enriching image understanding with part states. arXiv preprint arXiv:1612.07310, 2016.
[20] Alejandro Newell and Jia Deng. Associative embedding: End-to-end learning for joint detection and
grouping. arXiv preprint arXiv:1611.05424, 2016.
[21] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In
European Conference on Computer Vision, pages 483–499. Springer, 2016.
[22] Bryan A Plummer, Arun Mallya, Christopher M Cervantes, Julia Hockenmaier, and Svetlana Lazebnik.
Phrase localization and visual relationship detection with comprehensive linguistic cues. arXiv preprint
arXiv:1611.06641, 2016.
[23] David Raposo, Adam Santoro, David Barrett, Razvan Pascanu, Timothy Lillicrap, and Peter Battaglia. Discovering objects and their relations from entangled scene representations. arXiv preprint arXiv:1702.05068,
2017.
[24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection
with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
[25] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest
neighbor classification. In Advances in neural information processing systems, pages 1473–1480, 2005.
[26] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message
passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[27] Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, and Tat-Seng Chua. Visual translation embedding network
for visual relation detection. arXiv preprint arXiv:1702.08319, 2017.
[28] Bohan Zhuang, Lingqiao Liu, Chunhua Shen, and Ian Reid. Towards context-aware interaction recognition.
arXiv preprint arXiv:1703.06246, 2017.
Runtime Neural Pruning
Ji Lin*
Department of Automation
Tsinghua University
[email protected]
Jiwen Lu
Department of Automation
Tsinghua University
[email protected]
Yongming Rao*
Department of Automation
Tsinghua University
[email protected]
Jie Zhou
Department of Automation
Tsinghua University
[email protected]
Abstract
In this paper, we propose a Runtime Neural Pruning (RNP) framework which
prunes the deep neural network dynamically at the runtime. Unlike existing neural
pruning methods which produce a fixed pruned model for deployment, our method
preserves the full ability of the original network and conducts pruning according to
the input image and current feature maps adaptively. The pruning is performed in a
bottom-up, layer-by-layer manner, which we model as a Markov decision process
and use reinforcement learning for training. The agent judges the importance
of each convolutional kernel and conducts channel-wise pruning conditioned on
different samples, where the network is pruned more when the image is easier
for the task. Since the ability of network is fully preserved, the balance point is
easily adjustable according to the available resources. Our method can be applied
to off-the-shelf network structures and reach a better tradeoff between speed and
accuracy, especially with a large pruning rate.
1 Introduction
Deep neural networks have been proven to be effective in various areas. Despite the great success,
the capability of deep neural networks comes at the cost of huge computational burdens and large
power consumption, which is a big challenge for real-time deployments, especially for embedded
systems. To address this, several neural pruning methods have been proposed [11, 12, 13, 25, 38] to
reduce the parameters of convolutional networks, which achieve competitive or even slightly better
performance. However, these works mainly focus on reducing the number of network weights, which
have limited effects on speeding up the computation. More specifically, fully connected layers are
proven to be more redundant and contribute more to the overall pruning rate, while convolutional
layers are the most computationally dense part of the network. Moreover, such pruning strategy
usually leads to an irregular network structure, i.e. with part of sparsity in convolution kernels, which
needs a special algorithm for speeding up and is hard to harvest actual computational savings. A
surprisingly effective approach to trade accuracy for the size and the speed is to simply reduce the
number of channels in each convolutional layer. For example, Changpinyo et al. [27] proposed a
method to speed up the network by deactivating connections between filters in convolutional layers,
achieving a better tradeoff between the accuracy and the speed.
All these methods above prune the network in a fixed way, obtaining a static model for all the input
images. However, it is obvious that some of the input samples are easier for recognition, which can be
* indicates equal contribution
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
recognized by simple and fast models. Some other samples are more difficult, which require more
computational resources. This property is not exploited in previous neural pruning methods, where
input samples are treated equally. Since some of the weights are lost during the pruning process, the
network will lose the ability for some hard tasks forever. We argue that preserving the whole ability
of the network and pruning the neural network dynamically according to the input image is desirable
to achieve better speed and accuracy tradeoff compared to static pruning methods, which will also
not harm the upper bound ability of the network.
In this paper, we propose a Runtime Neural Pruning (RNP) framework by pruning the neural network
dynamically at the runtime. Different from existing methods that produce a fixed pruned model
for deployment, our method preserves the full ability of the original network and prunes the neural
network according to the input image and current feature maps. More specifically, we model the
pruning of each convolutional layer as a Markov decision process (MDP), and train an agent with
reinforcement learning to learn the best policy for pruning. Since the whole ability of the original
network is preserved, the balance point can be easily adjusted according to the available resources,
thus one single trained model can be adjusted for various devices from embedded systems to large
data centers. Experimental results on the CIFAR [22] and ImageNet [36] datasets show that our
framework successfully learns to allocate different amounts of computational resources to different
input images, and achieves much better performance at the same cost.
2 Related Work
Network pruning: There has been several works focusing on network pruning, which is a valid way
to reduce the network complexity. For example, Hanson and Pratt [13] introduced hyperbolic and
exponential biases to the pruning objective. Damage [25] and Surgeon [14] pruned the networks with
second-order derivatives of the objective. Han et al. [11, 12] iteratively pruned near-zero weights to
obtain a pruned network with no loss of accuracy. Some other works exploited more complicated
regularizers. For example, [27, 44] introduced structured sparsity regularizers on the network weights,
[32] put them to the hidden units. [17] pruned neurons based on the network output. Anwar et
al. [2] considered channel-wise and kernel-wise sparsity, and proposed to use particle filters to decide
the importance of connections and paths. Another aspect focuses on deactivating some subsets of
connections inside a fixed network architecture. LeCun et al. [24] removed connections between the
first two convolutional feature maps in a uniform manner. Depth multiplier method was proposed
in [16] to reduce the number of filters in each convolutional layer by a factor in a uniform manner.
These methods produced a static model for all the samples, failing to exploit the different property of
input images. Moreover, most of them produced irregular network structures after pruning, which
makes it hard to harvest actual computational savings directly.
Deep reinforcement learning: Reinforcement learning [29] aims to enable the agent to decide
the behavior from its experiences. Unlike conventional machine learning methods, reinforcement
learning is supervised through the reward signals of actions. Deep reinforcement learning [31] is a
combination of deep learning and reinforcement learning, which has been widely used in recent years.
For examples, Mnih et al. [31] combined reinforcement learning with CNN and achieved the humanlevel performance in the Atari game. Caicedo et al. [8] introduced reinforcement learning for active
object localization. Zhang et al. [45] employed reinforcement learning for vision control in robotics.
Reinforcement learning is also adopted for feature selection to build a fast classifier. [4, 15, 21].
Dynamic network: Dynamic network structures and executions have been studied in previous
works [7, 28, 33, 39, 40]. Some input-dependent execution methods rely on a pre-defined strategy.
Cascade methods [26, 28, 39, 40] relied on manually-selected thresholds to control execution.
Dynamic Capacity Network [1] used a specially designed method to calculate a saliency map for
control execution. Other conditional computation methods activate part of a network under a learned
policy. Bengio et al. [6] introduced Stochastic Times Smooth neurons as gaters for conditional
computation within a deep neural network, producing a sparse binary gater to be computed as a
function of the input. [5] selectively activated output of a fully-connected neural network, according
to a control policy parametrized as the sigmoid of an affine transformation from last activation. Liu et
al. [30] proposed Dynamic Deep Neural Networks (D2NN), a feed-forward deep neural network that
allows selective execution with self-defined topology, where the control policy is learned using single
step reinforcement learning.
Figure 1: Overall framework of our RNP. RNP consists of two sub-networks: the backbone CNN network and the decision network. The convolution kernels of the backbone CNN network are dynamically pruned according to the output Q-values of the decision network, conditioned on the state formed from the last calculated feature maps.
3 Runtime Neural Pruning
The overall framework of our RNP is shown in Figure 1. RNP consists of two sub-networks, the
backbone CNN network and the decision network which decides how to prune the convolution kernels
conditioned on the input image and current feature maps. The backbone CNN network can be any
kinds of CNN structure. Since convolutional layers are the most computationally dense layers in a
CNN, we focus on the pruning of convolutional layers in this work, leaving fully connected layers as
a classifier.
3.1 Bottom-up Runtime Pruning
We denote the backbone CNN with m convolutional layers as C, with convolutional layers denoted
as C1 , C2 , ..., Cm , whose kernels are K1 , K2 , ..., Km , respectively, with number of channels as
ni , i = 1, 2, ..., m. These convolutional layers produce feature maps F1 , F2 , ..., Fm as shown in
Figure 1, with the size of n_i × H × W, i = 1, 2, ..., m. The goal is to find and prune the redundant
convolutional kernels in Ki+1 , given feature maps Fi , i = 1, 2, ..., m ? 1, to reduce computation
and achieve maximum performance simultaneously.
Taking the i-th layer as an example, we denote our goal as the following objective:
$$\min_{K_{i+1},\, h} \; \mathbb{E}_{F_i}\!\left[ L_{cls}\!\left(\mathrm{conv}(F_i, K[h(F_i)])\right) + L_{pnt}(h(F_i)) \right] \qquad (1)$$
where Lcls is the loss of the classification task, Lpnt is the penalty term representing the tradeoff
between the speed and the accuracy, h(Fi ) is the conditional pruning unit that produces a list of
indexes of selected kernels according to the input feature map, K[·] is the indexing operation for kernel pruning and conv(x1, x2) is the convolutional operation for input feature map x1 and kernel x2. Note
that our framework infers through standard convolutional layer after pruning, which can be easily
boosted by utilizing GPU-accelerated neural network library such as cuDNN [9].
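To make the kernel-indexing operation K[·] concrete, here is a minimal PyTorch sketch (ours, not the authors' implementation) of a convolution whose output channels are selected at runtime and then computed by one standard dense convolution:

```python
import torch
import torch.nn.functional as F

def pruned_conv(feats, weight, bias, keep_idx):
    """Compute only the output channels selected by h(F_i) (illustrative sketch).

    feats:    (B, C_in, H, W) input feature maps F_i
    weight:   (C_out, C_in, k, k) full convolution kernel K_{i+1}
    keep_idx: LongTensor of output-channel indices chosen by the decision network
    """
    # K[h(F_i)]: slice the kernel, then run one standard dense convolution,
    # which GPU libraries in the cuDNN family accelerate directly.
    return F.conv2d(feats, weight[keep_idx], bias[keep_idx],
                    padding=weight.shape[-1] // 2)
```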
To solve the optimization problem in (1), we divide the whole problem into two sub-problems of
{K} and h, and adopt an alternate training strategy to solve each sub-problem independently with
the neural network optimizer such as RMSprop [42].
For an input sample, there are in total m pruning decisions to be made. A straightforward idea is to use the optimized decisions under a certain penalty to supervise the decision network. However,
for a backbone CNN with m layers, the time complexity of collecting the supervised signal is $O(\prod_{i=1}^{m} n_i)$, which is NP-hard and unacceptable for prevalent very deep architectures such as
VGG [37] and ResNet [3]. To simplify the training problem, we employ the following two strategies:
1) model the network pruning as a Markov decision process (MDP) [34] and train the decision network
by reinforcement learning; 2) redefine the action of pruning to reduce the number of decisions.
3.2 Layer-by-layer Markov Decision Process
The decision network consists of an encoder-RNN-decoder structure, where the encoder E embeds
feature map Fi into fixed-length code, RNN R aggregates codes from previous stages, and the
decoder D outputs the Q-value of each action. We formulate key elements in Markov decision
process (MDP) based on the decision network to adopt deep Q-learning in our RNP framework as
follows.
State: Given feature map Fi , we first extract a dense feature embedding pFi with global pooling,
as commonly conducted in [10, 35], whose length is n_i. Since the number of channels differs across convolutional layers, the length of pFi varies. To address this, we use the encoder E (a fully connected layer) to project the pooled feature into a fixed-length embedding E(pFi). E(pFi) from different layers are associated in a bottom-up way with an RNN structure, which produces a
latent code R(E(pFi )), regarded as embedded state information for reinforcement learning. The
decoder (also a fully connected layer) produces the Q-value for decision.
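A minimal PyTorch sketch of this encoder-RNN-decoder decision network (ours; the hidden width of 128 is an assumed hyper-parameter):

```python
import torch
import torch.nn as nn

class DecisionNetwork(nn.Module):
    """Global pooling -> encoder -> RNN -> shared decoder producing k Q-values (sketch)."""

    def __init__(self, channel_counts, hidden=128, k=4):
        super().__init__()
        # one encoder per layer, since the channel counts n_i differ
        self.encoders = nn.ModuleList(nn.Linear(n, hidden) for n in channel_counts)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.decoder = nn.Linear(hidden, k)  # shared across all stages

    def forward(self, feature_maps):
        h, q_values = None, []
        for enc, f in zip(self.encoders, feature_maps):  # f: (B, n_i, H, W)
            p = f.mean(dim=(2, 3))                       # global pooling -> (B, n_i)
            h = self.rnn(enc(p), h)                      # aggregate states bottom-up
            q_values.append(self.decoder(h))             # (B, k) Q-values per stage
        return q_values
```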
Action: The actions for each pruning are defined in an incremental way. For convolution kernel Ki
with ni output channels, we determine which output channels are calculated and which to prune. To
simplify the process, we group the output feature maps into k sets, denoted as F'_1, F'_2, ..., F'_k. One extreme case is k = n_i, where one single output channel forms a set. The actions a_1, a_2, ..., a_k are defined as follows: taking action a_i yields calculating the feature map groups F'_1, F'_2, ..., F'_i, i = 1, 2, ..., k. Hence the feature map groups with lower index are calculated more often, and the higher-indexed feature map groups are calculated only when the sample is difficult enough. In particular, the first feature map group is always calculated, which we refer to as the base feature map group. Since we do not have state information for the first convolutional layer, it is not pruned, leaving in total m − 1 actions to take.
Though the definitions of actions are rather simple, one can easily extend the definition for more
complicated network structures. For structures like Inception [41] and ResNet [3], we define the action based on the unit of a single block by sharing the pruning rate inside the block, which is more scalable and avoids having to consider the sophisticated inner structure.
Reward: The reward of each action taken at the t-th step with action ai is defined as:
$$r_t(a_i) = \begin{cases} -\alpha L_{cls} + (i-1)\cdot p, & \text{if inference terminates } (t = m-1) \\ (i-1)\cdot p, & \text{otherwise } (t < m-1) \end{cases} \qquad (2)$$
where p is a negative penalty that can be manually set. The reward was set according to the loss for
the original task. We took the negative loss −αL_cls as the final reward so that if a task is completed better, the final reward of the chain will be higher, i.e., closer to 0. α is a hyper-parameter that rescales L_cls into a proper range, since L_cls varies a lot for different network structures and different tasks. Taking actions that calculate more feature maps, i.e., with higher i, brings a higher penalty due to more computation. For t = 1, ..., m − 2, the reward consists only of the computation penalty, while at the last step the chain receives a final reward of −αL_cls to assess how well the pruned network completes the task.
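Eq. (2) transcribed directly into Python (illustrative; p = −0.1 follows Section 4, and α is the rescaling hyper-parameter):

```python
def reward(action_i, step_t, m, cls_loss, alpha=1.0, p=-0.1):
    """Reward r_t(a_i) from Eq. (2) (sketch); p < 0 penalizes extra feature map groups."""
    penalty = (action_i - 1) * p
    if step_t == m - 1:               # inference terminates: add the task reward
        return -alpha * cls_loss + penalty
    return penalty
```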
The key step of the Markov decision model is to decide the best action at certain state. In other
words, it is to find the optimal decision policy. By introducing the Q-learning method [31, 43], we
define Q(a_i, s_t) as the expected value of taking action a_i at state s_t. So the policy is defined as π = argmax_{a_i} Q(a_i, s_t).
Therefore, the optimal action-value function can be written as:
$$Q(s_t, a_i) = \max_{\pi} \mathbb{E}\!\left[ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots \mid \pi \right] \qquad (3)$$
where γ is the discount factor in Q-learning, providing a tradeoff between the immediate reward and the prediction of future rewards. We use the decision network to approximate the expected Q-value Q*(s_t, a_i), with all the decoders sharing parameters and outputting a k-length vector, each entry representing the Q* of the corresponding action. If the estimation is optimal, we will have Q*(s_t, a_i) = Q(s_t, a_i) exactly.
According to the Bellman equation [3], we adopt the mean squared error (MSE) as a criterion for training to keep the decision network self-consistent. So we rewrite the objective for the sub-problem of h in optimization problem (1) as:
$$\min_{W} L_{re} = \mathbb{E}\!\left[ r(s_t, a_i) + \gamma \max_{a_i} Q(s_{t+1}, a_i) - Q(s_t, a_i) \right]^2 \qquad (4)$$

where W is the weights of the decision network. In our proposed framework, a series of states are created
for a given input image. The training is conducted using an ε-greedy strategy that selects actions following π with probability ε and selects random actions with probability 1 − ε, while inference is conducted greedily. The backbone CNN network and decision network are trained alternately.
Algorithm 1 details the training procedure of the proposed method.
Algorithm 1 Runtime neural pruning for solving optimization problem (1):
Input: training set with labels {X}
Output: backbone CNN C, decision network D
1: initialize: train C in the normal way or initialize C with a pre-trained model
2: for i = 1, 2, ..., M do
3:     // train decision network
4:     for j = 1, 2, ..., N1 do
5:         Sample a random minibatch from {X}
6:         Forward and sample ε-greedy actions {s_t, a_t}
7:         Compute corresponding rewards {r_t}
8:         Backward Q-values for each stage and generate ∇_W L_re
9:         Update W using ∇_W L_re
10:     end for
11:     // fine-tune backbone CNN
12:     for k = 1, 2, ..., N2 do
13:         Sample a random minibatch from {X}
14:         Forward and calculate L_cls after runtime pruning by D
15:         Backward and generate ∇_C L_cls
16:         Update C using ∇_C L_cls
17:     end for
18: end for
19: return C and D
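Steps 5-9 amount to a standard deep Q-learning update; a hedged PyTorch sketch of the temporal-difference objective in Eq. (4), assuming the network's output for the current stage is a (B, k) tensor of Q-values and using an assumed discount factor:

```python
import torch

def q_learning_step(decision_net, optimizer, states, actions, rewards, next_q_max, gamma=0.9):
    """One TD update of the decision network weights W, following Eq. (4) (sketch).

    actions:    (B,) indices of the actions taken under the epsilon-greedy policy
    rewards:    (B,) rewards r(s_t, a_i) from Eq. (2)
    next_q_max: (B,) max_a Q(s_{t+1}, a), taken as zero at the final stage
    """
    q = decision_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s_t, a_t)
    target = rewards + gamma * next_q_max                                # Bellman target
    loss = ((target.detach() - q) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```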
It is worth noting that during the training of the agent, we manually set a fixed penalty for different actions and reach a balance status. During deployment, we can adjust the penalty by compensating the output Q* of each action with relative penalties, switching between different balance points of accuracy and computation cost, since the penalty is input-independent. Thus one single model can be deployed to different systems according to the available resources.
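Because the penalty is input-independent and enters the Q-values additively, the deployment-time balance point can be sketched as a simple offset on the predicted Q-values (illustrative):

```python
import torch

def select_action(q_values, extra_penalty=0.0):
    """Pick a pruning action after compensating Q-values with a penalty offset (sketch).

    q_values:      (k,) predicted Q-values for actions a_1 ... a_k
    extra_penalty: additional per-group cost; more negative -> more aggressive pruning
    """
    offsets = extra_penalty * torch.arange(q_values.numel(), dtype=q_values.dtype)
    return int(torch.argmax(q_values + offsets))  # compensates (i - 1) * delta_p
```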
4 Experiments
We conducted experiments on three different datasets including CIFAR-10, CIFAR-100 [22] and
ILSVRC2012 [36] to show the effectiveness of our method. For CIFAR-10, we used a four-layer convolutional network with 3 × 3 kernels. For CIFAR-100 and ILSVRC2012, we used the VGG-16
network for evaluation. For results on the CIFAR dataset, we compared the results obtained by our
RNP and naive channel reduction methods. For results on the ILSVRC2012 dataset, we compared
the results achieved by our RNP with recent state-of-the-art network pruning methods.
4.1 Implementation Details
We trained RNP in an alternating manner, where the backbone CNN network and the decision network
were trained iteratively. To help the training converge faster, we first initialized the CNN with random
pruning, where decisions were randomly made. Then we fixed the CNN parameters and trained the
decision network, regarding the backbone CNN as an environment where the agent can take actions and get corresponding rewards. We then fixed the decision network and fine-tuned the backbone CNN following the policy of the decision network, which helps the CNN specialize in the specific task. The
initialization was trained using SGD with an initial learning rate of 0.01, decayed by a factor of 10 after 120 and 160 epochs, for 200 epochs in total. The rest of the training was conducted using RMSprop [42] with a learning rate of 1e-6. For the ε-greedy strategy, the hyper-parameter ε was annealed linearly from 1.0 to 0.1 in the beginning and fixed at 0.1 thereafter.
For most experiments, we set the number of convolutional groups to k = 4, which is a tradeoff between the performance and the complexity. Increasing k will enable more possible pruning combinations, while at the same time making it harder for reinforcement learning with an enlarged action space.
Since the action is taken conditioned on the current feature map, the first convolutional layer is not pruned, so we have in total m − 1 decisions to make, forming a decision sequence. During training, we set the penalty for extra feature map calculation to p = −0.1, which is adjusted during deployment. The scale factor α was set such that the average αL_cls is approximately 1, to make the relative difference more significant. For experiments on the VGG-16 model, we define the actions based on the unit of a single block by sharing the pruning rate inside the block, as mentioned in Section 3.2, to simplify implementation and accelerate convergence.
For the vanilla baseline comparison on CIFAR, we evaluated the performance of a normal neural network with the same amount of computation. More specifically, we calculated the average number of multiplications of every convolutional layer and rounded it up to the nearest number of channels sharing the same computation, which resulted in an identical network topology with reduced convolutional channels. We trained the vanilla baseline network with SGD until convergence for comparison. All our experiments were implemented using a modified Caffe toolbox [20].
4.2 Intuitive Experiments
To have an intuitive understanding of our framework, we first conducted a simple experiment to
show the effectiveness and underlying logic of our RNP. We considered a 3-category classification
problem, consisting of male faces, female faces and background samples. It is intuitive to think that
separating male faces from female faces is a much more difficult task than separating faces from
background, needing more detailed attention, so more resources should be allocated to face images
than background images. In other words, a good tradeoff for RNP is to prune the neural network
more when dealing with background images and keep more convolutional channels when inputting a
face image.
To validate this idea, we constructed a 3-category dataset using Labeled Faces in the Wild [18] dataset,
which we referred to as LFW-T. More specifically, we randomly cropped 3000 images for both male
and female faces, and also 3000 background images randomly cropped from LFW. We used the
attributes from [23] as labels for male and female faces. All these images were resized to 32 × 32 pixels. We held out 2000 images for testing and used the remaining for training. For this experiment, we designed a 3-layer convolutional network with two fully connected layers. All convolutional kernels are 3 × 3, with 32, 32, and 64 output channels respectively. We followed the same training protocol as
mentioned above with p = −0.1, and focused on the difference between different classes.
The original network achieved 91.1% accuracy. By adjusting the penalty, we managed to get a certain
point of accuracy-computation tradeoff, where computations (multiplications) were reduced by a
factor of 2, while obtaining even slightly higher accuracy of 91.75%. We looked into the average
computations of different classes by counting multiplications of convolutional layers. The results
were shown in Figure 2. For the whole network, RNP allocated more computations on faces images
than background images, at approximately a ratio of 2, which clearly demonstrates the effectiveness
of RNP. However, since the first convolutional layers and fully connected layers were not pruned, to
get the absolute ratio of pruning rate, we also studied the pruning of a certain convolutional layer. In
this case, we selected the last convolutional layer conv3. The results are shown on the right figure.
We see that for this particular layer, the computation for face images is almost 5 times that of background images. The differences in computation show that RNP is able to find the relative difficulty of different tasks and exploit this property to prune the neural network accordingly.
4.3 Results
CIFAR-10 & CIFAR-100: For CIFAR-10 and CIFAR-100, we used a four-layer convolutional
network and the VGG-16 network, respectively. The goal of these two experiments is to compare our RNP with the vanilla baseline network, where the number of convolutional channels was reduced directly from the beginning.
[Figure 2: bar charts of #Multiply (mil.) for Average, Male, Female, and Background: (a) Average Mults. of Whole Network (original: 4.950M mults.); (b) Average Mults. of Conv3 (original: 1.180M mults.).]
Figure 2: The average multiplication numbers of different classes in our intuitive experiment. We show the computation numbers for both the whole network (on the left) and the fully pruned convolutional layer conv3 (on the right). The results show that RNP succeeds in focusing more on face images by preserving more convolutional channels, while pruning the network more when dealing with background images, reaching a good tradeoff between accuracy and speed.
[Figure 3 panels: Accuracy (%) versus #Multiply (mil.) curves for RNP and the vanilla baseline on both datasets.]
Figure 3: The results on CIFAR-10 (on the left) and CIFAR-100 (on the right). For the vanilla curve, the
rightmost point is the full model and the leftmost is the 1/4 model. RNP consistently outperforms naive channel
reduction models by a very large margin.
The fully connected layers of standard VGG-16 are too redundant
for CIFAR-100, so we eliminated one of the fully connected layers and set the inner dimension to 512.
The modified VGG-16 model was easier to converge and actually slightly outperformed the original
model on CIFAR-100. The results are shown in Figure 3. We see that the accuracy of the vanilla baseline
method suffers a steep drop once the computation savings exceed 2.5×, while our RNP
consistently outperforms the baseline model and achieves competitive performance even at
very large computation saving rates.
ILSVRC2012: We compared our RNP with recent state-of-the-art neural pruning methods [19, 27,
46] on the ImageNet dataset using the VGG-16 model, which won second place in the ILSVRC2014
challenge. We evaluated the top-5 error using single-view testing on the ILSVRC2012-val set and trained
the RNP model on the ILSVRC2012-train set. The view was the center 224 × 224 region cropped from the
resized image whose shorter side is 256, following [46]. RNP was fine-tuned from the publicly
available model² which achieves 10.1% top-5 error on the ILSVRC2012-val set.
Table 1: Comparisons of increase of top-5 error on ILSVRC2012-val (%) with recent state-of-the-art
methods, where we used the 10.1% top-5 error baseline as the reference.

Speed-up                                        3×      4×      5×      10×
Jaderberg et al. [19] ([46]'s implementation)   2.3     9.7     29.7    –
Asymmetric [46]                                 3.2     3.84    14.6    –
Filter pruning [27] (our implementation)        –       8.6     –       –
Ours                                            2.32    3.23    3.58    4.89
Figure 4: Visualization of the original images and the feature maps of the four convolutional groups. The presented feature maps are the averages of the corresponding convolutional groups.
Table 2: GPU inference time under different theoretical speed-up ratios on the ILSVRC2012-val set.

Speed-up solution   Increase of top-5 error (%)   Mean inference time (ms)
VGG-16 (1×)         0                             3.26 (1.0×)
Ours (3×)           2.32                          1.38 (2.3×)
Ours (4×)           3.23                          1.07 (3.0×)
Ours (5×)           3.58                          0.880 (3.7×)
Ours (10×)          4.89                          0.554 (5.9×)
Results are shown in
Table 1, where the speed-up is the theoretical speed-up ratio computed from the complexity. We see that
RNP performs comparably to other methods at relatively small speed-up ratios and
outperforms them by a significant margin at large speed-up ratios. We further ran
experiments at a larger ratio (10×) and found that RNP suffers only a slight additional drop (1.31% relative to
5×), far better than others' results even at the 5× setting.
4.4 Analysis
Analysis of Feature Maps: Since we define the actions in an incremental way, the convolutional
channels with lower indices are calculated more often (a special case is the base network, which is always
calculated). The convolutional groups with higher indices are increments to the lower-indexed ones,
so the functions of the different convolutional groups may resemble "low-frequency" and "high-frequency" filters. We visualized the different functions of the convolutional groups by calculating the average
feature maps produced by each group. Specifically, we took CIFAR-10 as an example and
visualized the feature maps of conv2 with k = 4. The results are shown in Figure 4.
From the figure, we see that the base convolutional groups have the highest activations to the input
images and describe the overall appearance of the object well, while convolutional groups
with higher indices have sparse activations, which can be regarded as a compensation to the base
convolutional groups. The underlying logic of RNP is thus to judge when it is necessary to compensate
the base convolutional groups with higher ones: if the task is easy, RNP prunes the high-order
feature maps for speed; otherwise it brings in more computations to pursue accuracy.
Runtime Analysis: One advantage of our RNP is its convenience for deployment, which makes it
easy to harvest actual computational time savings. We therefore measured the actual inference time
for VGG-16 on the ILSVRC2012-val set under GPU acceleration. Inference times were measured on a
Titan X (Pascal) GPU with batch size 64. Table 2 shows the GPU inference time under different
settings. We see that our RNP generalizes well to GPU execution.
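The paper does not include its timing script; the following is a minimal sketch of the standard measurement protocol we would expect behind Table 2 (batch size 64 per the paper; the warm-up and iteration counts are our assumptions). Synchronizing before reading the clock matters because CUDA kernels launch asynchronously.

```python
import time
import torch

@torch.no_grad()
def mean_batch_time_ms(model, batch=64, size=224, warmup=10, iters=100):
    x = torch.randn(batch, 3, size, size, device="cuda")
    for _ in range(warmup):          # warm-up to exclude cuDNN autotuning etc.
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()         # wait for all queued kernels to finish
    return (time.perf_counter() - start) / iters * 1e3
```

Per-image time follows by dividing by the batch size.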
² http://www.robots.ox.ac.uk/~vgg/research/very_deep/
5 Conclusion
In this paper, we have proposed a Runtime Neural Pruning (RNP) framework to prune the neural
network dynamically. Since the ability of the network is fully preserved, the balance point is easily
adjustable according to the available resources. Our method can be applied to off-the-shelf network structures and reaches a better tradeoff between speed and accuracy. Experimental results
demonstrate the effectiveness of the proposed approach.
Acknowledgements
We would like to thank Song Han, Huazhe (Harry) Xu, Xiangyu Zhang and Jian Sun for their generous
help and insightful advice. This work is supported by the National Natural Science Foundation of
China under Grant 61672306 and the National 1000 Young Talents Plan Program. The corresponding
author of this work is Jiwen Lu.
References
[1] Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo Larochelle, and Aaron Courville. Dynamic capacity networks. arXiv preprint arXiv:1511.07838, 2015.
[2] Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. arXiv preprint arXiv:1512.08571, 2015.
[3] Richard Bellman. Dynamic programming and Lagrange multipliers. PNAS, 42(10):767–769, 1956.
[4] Djalel Benbouzid, Róbert Busa-Fekete, and Balázs Kégl. Fast classification using sparse decision dags. arXiv preprint arXiv:1206.6387, 2012.
[5] Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
[6] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[7] Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for fast test-time prediction. arXiv preprint arXiv:1702.07811, 2017.
[8] Juan C Caicedo and Svetlana Lazebnik. Active object localization with deep reinforcement learning. In ICCV, pages 2488–2496, 2015.
[9] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
[10] Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. arXiv preprint arXiv:1612.02297, 2016.
[11] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[12] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NIPS, pages 1135–1143, 2015.
[13] Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In NIPS, pages 177–185, 1989.
[14] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. NIPS, pages 164–164, 1993.
[15] He He, Jason Eisner, and Hal Daumé. Imitation learning by coaching. In Advances in Neural Information Processing Systems, pages 3149–3157, 2012.
[16] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[17] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.
[18] Gary B Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, 2007.
[19] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[20] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] Sergey Karayev, Tobias Baumgartner, Mario Fritz, and Trevor Darrell. Timely object recognition. In Advances in Neural Information Processing Systems, pages 890–898, 2012.
[22] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[23] Neeraj Kumar, Alexander Berg, Peter N Belhumeur, and Shree Nayar. Describable visual attributes for face verification and image search. PAMI, 33(10):1962–1977, 2011.
[24] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[25] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In NIPS, pages 598–605, 1990.
[26] Sam Leroux, Steven Bohez, Elias De Coninck, Tim Verbelen, Bert Vankeirsbilck, Pieter Simoens, and Bart Dhoedt. The cascading neural network: building the internet of smart things. Knowledge and Information Systems, pages 1–24, 2017.
[27] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
[28] Haoxiang Li, Zhe Lin, Xiaohui Shen, Jonathan Brandt, and Gang Hua. A convolutional neural network cascade for face detection. In CVPR, pages 5325–5334, 2015.
[29] Michael L Littman. Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553):445–451, 2015.
[30] Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017.
[31] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[32] Kenton Murray and David Chiang. Auto-sizing neural networks: With applications to n-gram language models. arXiv preprint arXiv:1508.05051, 2015.
[33] Augustus Odena, Dieterich Lawson, and Christopher Olah. Changing model behavior at test-time using reinforcement learning. arXiv preprint arXiv:1702.07780, 2017.
[34] Martin L Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
[35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
[36] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211–252, 2015.
[37] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[38] Nikko Ström. Phoneme probability estimation with dynamic sparsely connected artificial neural networks. The Free Speech Journal, 5:1–41, 1997.
[39] Chen Sun, Manohar Paluri, Ronan Collobert, Ram Nevatia, and Lubomir Bourdev. ProNet: Learning to propose object-specific boxes for cascaded neural networks. In CVPR, pages 3485–3493, 2016.
[40] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deep convolutional network cascade for facial point detection. In CVPR, pages 3476–3483, 2013.
[41] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[42] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
[43] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[44] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In NIPS, pages 2074–2082, 2016.
[45] Fangyi Zhang, Jürgen Leitner, Michael Milford, Ben Upcroft, and Peter Corke. Towards vision-based deep reinforcement learning for robotic motion control. arXiv preprint arXiv:1511.03791, 2015.
[46] Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convolutional networks for classification and detection. PAMI, 38(10):1943–1955, 2016.
Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks

Surbhi Goel*
Department of Computer Science
University of Texas at Austin
surbhi@cs.utexas.edu

Adam Klivans*
Department of Computer Science
University of Texas at Austin
klivans@cs.utexas.edu
Abstract
We consider the problem of learning function classes computed by neural networks with various activations (e.g. ReLU or Sigmoid), a task believed to be computationally intractable in the worst-case. A major open problem is to understand
the minimal assumptions under which these classes admit provably efficient algorithms. In this work we show that a natural distributional assumption corresponding to eigenvalue decay of the Gram matrix yields polynomial-time algorithms in
the non-realizable setting for expressive classes of networks (e.g. feed-forward
networks of ReLUs). We make no assumptions on the structure of the network or
the labels. Given sufficiently-strong eigenvalue decay, we obtain fully-polynomial
time algorithms in all the relevant parameters with respect to square-loss. This is
the first purely distributional assumption that leads to polynomial-time algorithms
for networks of ReLUs. Further, unlike prior distributional assumptions (e.g., the
marginal distribution is Gaussian), eigenvalue decay has been observed in practice
on common data sets.
1 Introduction
Understanding the computational complexity of learning neural networks from random examples
is a fundamental problem in machine learning. Several researchers have proved results showing
computational hardness for the worst-case complexity of learning various networks, that is, when
no assumptions are made on the underlying distribution or the structure of the network [10, 16,
21, 26, 43]. As such, it seems necessary to take some assumptions in order to develop efficient
algorithms for learning deep networks (the most expressive class of networks known to be learnable
in polynomial-time without any assumptions is a sum of one hidden layer of sigmoids [16]). A
major open question is to understand what are the "correct" or minimal assumptions to take in
order to guarantee efficient learnability³. An oft-taken assumption is that the marginal distribution is
equal to some smooth distribution such as a multivariate Gaussian. Even under such a distributional
assumption, however, there is evidence that fully polynomial-time algorithms are still hard to obtain
for simple classes of networks [19, 36]. As such, several authors have made further assumptions on
the underlying structure of the model (and/or work in the noiseless or realizable setting).
In fact, in an interesting recent work, Shamir [34] has given evidence that both distributional assumptions and assumptions on the network structure are necessary for efficient learnability using
gradient-based methods. Our main result is that under only an assumption on the marginal distribution, namely eigenvalue decay of the Gram matrix, there exist efficient algorithms for learning broad
* Work supported by a Microsoft Data Science Initiative Award. Part of this work was done while visiting the Simons Institute for Theoretical Computer Science.
³ For example, a very recent paper of Song, Vempala, Xie, and Williams [36] asks "What form would such an explanation take, in the face of existing complexity-theoretic lower bounds?"
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
classes of neural networks even in the non-realizable (agnostic) setting with respect to square loss.
Furthermore, eigenvalue decay has been observed often in real-world data sets, unlike distributional
assumptions that take the marginal to be unimodal or Gaussian. As one would expect, stronger assumptions on the eigenvalue decay result in polynomial learnability for broader classes of networks,
but even mild eigenvalue decay will result in savings in runtime and sample complexity.
The relationship between our assumption on eigenvalue decay and prior assumptions on the
marginal distribution being Gaussian is similar in spirit to the dichotomy between the complexity of
certain algorithmic problems on power-law graphs versus Erdős–Rényi graphs. Several important
graph problems such as clique-finding become much easier when the underlying model is a
random graph with appropriate power-law decay (as opposed to assuming the graph is generated
from the classical G(n, p) model) [6, 22]. In this work we prove that neural network learning
problems become tractable when the underlying distribution induces an empirical Gram matrix with
sufficiently strong eigenvalue-decay.
Our Contributions. Our main result is quite general and holds for any function class that can
be suitably embedded in an RKHS (Reproducing Kernel Hilbert Space) with corresponding kernel
function k (we refer readers unfamiliar with kernel methods to [30]). Given m draws (x_1, . . . , x_m) from a distribution and kernel k, recall that the Gram matrix K is an m × m matrix whose (i, j)
entry equals k(x_i, x_j). For ease of presentation, we begin with an informal statement of our main
result that highlights the relationship between the eigenvalue decay assumption and the run-time and
sample complexity of our final algorithm.
Theorem 1 (Informal). Fix function class C and kernel function k. Assume C is approximated in the
corresponding RKHS with norm bound B. After drawing m samples, let K/m be the (normalized)
m × m Gram matrix with eigenvalues {λ_1, . . . , λ_m}. For error parameter ε > 0,
1. If, for sufficiently large i, λ_i ≤ O(i^{-p}), then C is efficiently learnable with m = Õ(B^{1/p}/ε^{2+3/p}).
2. If, for sufficiently large i, λ_i ≤ O(e^{-i}), then C is efficiently learnable with m = Õ((log B)/ε²).
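Since Theorem 1 is stated in terms of the spectrum of K/m, the assumption is directly checkable on data. The following is a minimal sketch (our own, not from the paper) that computes the normalized Gram spectrum and estimates the polynomial-decay exponent p by a log-log fit on the tail.

```python
import numpy as np

def gram_decay(X, kernel):
    # Eigenvalues of the normalized Gram matrix K/m, sorted in decreasing order.
    m = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return np.sort(np.linalg.eigvalsh(K / m))[::-1]

def estimate_decay_exponent(lams, tail_frac=0.5):
    # Fit log(lambda_i) ~ log(C) - p*log(i) on the spectrum's tail; returns p.
    i0 = int(len(lams) * (1 - tail_frac))
    idx = np.arange(i0 + 1, len(lams) + 1)
    slope, _ = np.polyfit(np.log(idx), np.log(np.maximum(lams[i0:], 1e-300)), 1)
    return -slope
```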
We allow a failure probability for the event that the eigenvalues do not decay. In all prior work,
the sample complexity m depends linearly on B, and for many interesting concept classes (such as
ReLUs), B is exponential in one or more relevant parameters. Given Theorem 1, we can use known
structural results for embedding neural networks into an RKHS to estimate B and take a corresponding eigenvalue decay assumption to obtain polynomial-time learnability. Applying bounds recently
obtained by Goel et al. [16] we have
Corollary 2. Let C be the class of all fully-connected networks of ReLUs with one hidden layer
of ℓ hidden ReLU activations feeding into a single ReLU output activation (i.e., two hidden layers
or depth-3). Then, assuming eigenvalue decay of O(i^{-ℓ/ε}), C is learnable in polynomial time with
respect to square loss on S^{n-1}. If ReLU is replaced with sigmoid, then we require eigenvalue decay
of Õ(i^{-√ℓ·log(√ℓ/ε)}).
For higher depth networks, bounds on the required eigenvalue decay can be derived from structural results in [16]. Without taking an assumption, the fastest known algorithms for learning the
above networks run in time exponential in the number of hidden units and accuracy parameter (but
polynomial in the dimension) [16].
Our proof develops a novel approach for bounding the generalization error of kernel methods,
namely we develop compression schemes tailor-made for classifiers induced by kernel-based regression, as opposed to current Rademacher-complexity based approaches. Roughly, a compression
scheme is a mapping from a training set S to a small subsample S′ and side-information I. Given
this compressed version of S, the decompression algorithm should be able to generate a classifier h.
In recent work, David, Moran and Yehudayoff [13] have observed that if the size of the compression
is much less than m (the number of samples), then the empirical error of h on S is close to its true
error with high probability.
At the core of our compression scheme is a method for giving small description length (i.e., o(m)
bit complexity), approximate solutions to instances of kernel ridge regression. Even though we
assume K has decaying eigenvalues, K is neither sparse nor low-rank, and even a single column
or row of K has bit complexity at least m, since K is an m × m matrix! Nevertheless, we can
prove that recent tools from Nyström sampling [28] imply a type of sparsification for solutions
of certain regression problems involving K. Additionally, using preconditioning, we can bound
the bit complexity of these solutions and obtain the desired compression scheme. At each stage
we must ensure that our compressed solutions do not lose too much accuracy, and this involves
carefully analyzing various matrix approximations. Our methods are the first compression-based
generalization bounds for kernelized regression.
Related Work. Kernel methods [30] such as SVM, kernel ridge regression and kernel PCA have
been extensively studied due to their excellent performance and strong theoretical properties. For
large data sets, however, many kernel methods become computationally expensive. The literature
on approximating the Gram matrix with the overarching goal of reducing the time and space complexity of kernel methods is now vast. Various techniques such as random sampling [39], subspace
embedding [2], and matrix factorization [15] have been used to find a low-rank approximation that
is efficient to compute and gives small approximation error. The most relevant set of tools for our
paper is Nystr?m sampling [39, 14], which constructs an approximation of K using a subset of
the columns indicated by a selection matrix S to generate a positive semi-definite approximation.
Recent work on leverage scores have been used to improve the guarantees of Nystr?m sampling in
order to obtain linear time algorithms for generating these approximations [28].
The novelty of our approach is to use Nyström sampling in conjunction with compression schemes
to give a new method for proving generalization error bounds for kernel methods. Compression schemes have typically been studied in the context of classification problems in PAC learning
and for combinatorial problems related to VC dimension [23, 24]. Only recently have some authors
considered compression schemes in a general, real-valued learning scenario [13]. Cotter, Shalev-Shwartz, and Srebro have studied compression for classification using SVMs to prove that for general distributions, compressing classifiers with low generalization error is not possible [9].
The general phenomenon of eigenvalue decay of the Gram matrix has been studied from both a theoretical and applied perspective. Some empirical studies of eigenvalue decay and related discussion
can be found in [27, 35, 38]. There has also been prior work relating eigenvalue decay to generalization error in the context of SVMs or Kernel PCA (e.g., [29, 35]). Closely related notions to
eigenvalue decay are that of local Rademacher complexity due to Bartlett, Bousquet, and Mendelson
[4] (see also [5]) and that of effective dimensionality due to Zhang [42].
The above works of Bartlett et al. and Zhang give improved generalization bounds via data-dependent estimates of eigenvalue decay of the kernel. At a high level, the goal of these works
is to work under an assumption on the effective dimension and improve Rademacher-based generalization error bounds from 1/√m to 1/m (m is the number of samples) for functions embedded in an
RKHS of unit norm. These works do not address the main obstacle of this paper, however, namely
overcoming the complexity of the norm of the approximating RKHS. Their techniques are mostly
incomparable even though the intent of using effective dimension as a measure of complexity is the
same.
Shamir has shown that for general linear prediction problems with respect to square-loss and norm
bound B, a sample complexity of Ω(B) is required for gradient-based methods [33]. Our work
shows that eigenvalue decay can dramatically reduce this dependence, even in the context of kernel
regression where we want to run in time polynomial in n, the dimension, rather than the (much
larger) dimension of the RKHS.
Recent work on Learning Neural Networks. Due in part to the recent exciting developments in
deep learning, there have been several works giving provable results for learning neural networks
with various activations (threshold, sigmoid, or ReLU). For the most part, these results take various
assumptions on either 1) the distribution (e.g., Gaussian or Log-Concave) or 2) the structure of the
network architecture (e.g. sparse, random, or non-overlapping weight vectors) or both and often have
a bad dependence on one or more of the relevant parameters (dimension, number of hidden units,
depth, or accuracy). Another way to restrict the problem is to work only in the noiseless/realizable
setting. Works that fall into one or more of these categories include [20, 44, 40, 17, 31, 41, 11].
Kernel methods have been applied previously to learning neural networks [43, 26, 16, 12]. The
current broadest class of networks known to be learnable in fully polynomial-time in all parameters
with no assumptions is due to Goel et al. [16], who showed how to learn a sum of one hidden layer of
sigmoids over the domain of S^{n-1}, the unit sphere in n dimensions. We are not aware of other prior
3
work that takes only a distributional assumption on the marginal and achieves fully polynomial-time
algorithms for even simple networks (for example, one hidden layer of ReLUs).
Much work has also focused on the ability of gradient descent to succeed in parameter estimation
for learning neural networks under various assumptions with an intense focus on the structure of
local versus global minima [8, 18, 7, 37]. Here we are interested in the traditional task of learning in
the non-realizable or agnostic setting and allow ourselves to output a hypothesis outside the function
class (i.e., we allow improper learning). It is well known that for even simple neural networks, for
example for learning a sigmoid with respect to square-loss, there may be many bad local minima
[1]. Improper learning allows us to avoid these pitfalls.
2 Preliminaries
Notation. The input space is denoted by X and the output space is denoted by Y. Vectors are represented with boldface letters such as x. We denote a kernel function by k_ψ(x, x′) = ⟨ψ(x), ψ(x′)⟩, where ψ is the associated feature map for the kernel, and K_ψ is the corresponding reproducing kernel Hilbert space (RKHS). For necessary background material on kernel methods we refer the reader to [30].
Selection and Compression Schemes. It is well known that in the context of PAC learning Boolean
function classes, a suitable type of compression of the training data implies learnability [25]. Perhaps
surprisingly, the details regarding the relationship between compression and certain other real-valued
learning tasks have not been worked out until very recently. A convenient framework for us will be
the notion of compression and selection schemes due to David et al. [13].
A selection scheme is a pair of maps (κ, ρ) where κ is the selection map and ρ is the reconstruction
map. κ takes as input a sample S = ((x_1, y_1), . . . , (x_m, y_m)) and outputs a sub-sample S′ and a
finite binary string b as side information. ρ takes this input and outputs a hypothesis h. The size of
the selection scheme is defined to be k(m) = |S′| + |b|. We present a slightly modified version of
the definition of an approximate compression scheme due to [13]:
Definition 3 ((ε, δ)-approximate agnostic compression scheme). A selection scheme (κ, ρ) is an
(ε, δ)-approximate agnostic compression scheme for hypothesis class H and samples satisfying
property P if for all samples S that satisfy P, with probability 1 − δ, f = ρ(κ(S)) satisfies
(1/m) Σ_{i=1}^m l(f(x_i), y_i) ≤ min_{h∈H} ( (1/m) Σ_{i=1}^m l(h(x_i), y_i) ) + ε.
Compression has connections to learning in the general loss setting through the following theorem
which shows that as long as k(m) is small, the selection scheme generalizes.
Theorem 4 (Theorem 30.2 [32], Theorem 3.2 [13]). Let (κ, ρ) be a selection scheme of size k =
k(m), and let A_S = ρ(κ(S)). Given m i.i.d. samples drawn from any distribution D such that
k ≤ m/2, for a constant-bounded loss function l : Y′ × Y → R⁺, with probability 1 − δ we have

E_{(x,y)∼D}[l(A_S(x), y)] ≤ (1/m) Σ_{i=1}^m l(A_S(x_i), y_i) + √( ε · ( (1/m) Σ_{i=1}^m l(A_S(x_i), y_i) + ε ) ),

where ε = 50 · √( (k log(m/k) + log(1/δ)) / m ).
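To get a feel for the deviation term numerically, the helper below (our own convenience function, not from the paper) evaluates ε; for instance k = 100, m = 10⁶, δ = 0.01 gives ε ≈ 1.5, while m = 10⁸ gives ε ≈ 0.19, illustrating why the compression size k must be far below m.

```python
import math

def theorem4_eps(k, m, delta):
    # The deviation term of Theorem 4:
    # eps = 50 * sqrt((k*log(m/k) + log(1/delta)) / m).
    return 50.0 * math.sqrt((k * math.log(m / k) + math.log(1.0 / delta)) / m)
```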
3 Problem Overview
In this section we give a general outline for our main result. Let S = {(x_1, y_1), . . . , (x_m, y_m)} be a
training set of samples drawn i.i.d. from some arbitrary distribution D on X × [0, 1], where X ⊆ R^n.
Let us consider a concept class C such that for all c ∈ C and x ∈ X we have c(x) ∈ [0, 1]. We
wish to learn the concept class C with respect to the square loss, that is, we wish to find c ∈ C that
approximately minimizes E_{(x,y)∼D}[(c(x) − y)²]. A common way of solving this is by solving the
empirical minimization problem (ERM) given below and subsequently proving that it generalizes.
Optimization Problem 1
minimize_{c∈C} (1/m) Σ_{i=1}^m (c(x_i) − y_i)²
Unfortunately, it may not be possible to efficiently solve the ERM in polynomial time due to issues
such as non-convexity. A way of tackling this is to show that the concept class can be approximately
minimized by another hypothesis class of linear functions in a high-dimensional feature space (this
in turn presents new obstacles for proving generalization-error bounds, which is the focus of this
paper).
Definition 5 (ε-approximation). Let C_1 and C_2 be function classes mapping domain X to R. C_1 is ε-approximated by C_2 if for every c ∈ C_1 there exists c′ ∈ C_2 such that for all x ∈ X, |c(x) − c′(x)| ≤ ε.
Suppose C can be ε-approximated in the above sense by the hypothesis class H_ψ = {x ↦ ⟨v, ψ(x)⟩ | v ∈ K_ψ, ⟨v, v⟩ ≤ B} for some B and kernel function k_ψ. We further assume that the
kernel is bounded, that is, |k_ψ(x, x̃)| ≤ M for some M > 0 for all x, x̃ ∈ X. Thus, the problem
relaxes to the following:
Optimization Problem 2
minimize_{v∈K_ψ} (1/m) Σ_{i=1}^m (⟨v, ψ(x_i)⟩ − y_i)²
subject to ⟨v, v⟩ ≤ B
Using the Representer theorem, we have that the optimal solution for the above is of the form
v* = Σ_{i=1}^m α_i ψ(x_i) for some α ∈ R^m. Denoting the sample kernel matrix by K, with K_{i,j} =
k_ψ(x_i, x_j), the above optimization problem is equivalent to the following optimization problem:
Optimization Problem 3
minimize_{α∈R^m} (1/m) ||Kα − Y||₂²
subject to αᵀKα ≤ B
where Y is the vector of the y_i, and ||Y||_∞ ≤ 1 since y_i ∈ [0, 1] for all i ∈ [m]. Let α_B
be the optimal solution of the above problem. This is known to be efficiently solvable in poly(m, n)
time as long as the kernel function is efficiently computable.
Applying Rademacher complexity bounds to H_ψ yields generalization error bounds that decrease,
roughly, on the order of √(B/m) (c.f. Supplemental 1.1). If B is exponential in 1/ε, the accuracy
parameter, or in n, the dimension, as in the case of bounded-depth networks of ReLUs, then this
dependence leads to exponential sample complexity. As mentioned in Section 1, in the context of
eigenvalue decay, various results [42, 4, 5] have been obtained to improve the dependence of √(B/m)
to B/m, but little is known about improving the dependence on B.
Our goal is to show that eigenvalue decay of the empirical Gram matrix does yield generalization bounds with better dependence on B. The key is to develop a novel compression scheme for
kernelized ridge regression. We give a step-by-step analysis for how to generate an approximate,
compressed version of the solution to Optimization Problem 3. Then, we will carefully analyze the
bit complexity of our approximate solution and realize our compression scheme. Finally, we can put
everything together and show how quantitative bounds on eigenvalue decay directly translate into
compression schemes with low generalization error.

4 Compressing the Kernel Solution

Through a sequence of steps, we will sparsify α to find a solution of much smaller bit complexity
that is still an approximate solution (to within a small additive error). The quality and size of the
approximation will depend on the eigenvalue decay.
Lagrangian Relaxation. We relax Optimization Problem 3 and consider the Lagrangian version of
the problem to account for the norm-bound constraint. This version is convenient for us, as it has a
nice closed-form solution.
Optimization Problem 4
minimize_{α∈R^m} (1/m) ||Kα − Y||₂² + λ αᵀKα
We will later set λ such that the error of considering this relaxation is small. It is easy to see that the
optimal solution of the above Lagrangian version is α = (K + λmI)⁻¹ Y.
Preconditioning. To avoid extremely small or zero eigenvalues, we consider a perturbed version of K, K_γ = K + γmI. This guarantees that the eigenvalues of K_γ are always greater than
or equal to γm. This property is useful in our later analysis. Henceforth, we consider the
following optimization problem on the perturbed version of K:
Optimization Problem 5
minimize_{α∈R^m} (1/m) ||K_γ α − Y||₂² + λ αᵀK_γ α
The optimal solution of the perturbed version is α_γ = (K_γ + λmI)⁻¹ Y = (K + (γ + λ)mI)⁻¹ Y.
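As a sanity check, the closed forms above are one line of linear algebra; the sketch below (ours, not the paper's code) covers both Optimization Problem 4 (γ = 0) and Optimization Problem 5.

```python
import numpy as np

def ridge_alpha(K, Y, lam, gamma=0.0):
    # alpha_gamma = (K_gamma + lam*m*I)^{-1} Y = (K + (gamma + lam)*m*I)^{-1} Y
    m = K.shape[0]
    return np.linalg.solve(K + (gamma + lam) * m * np.eye(m), Y)
```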
Sparsifying the Solution via Nyström Sampling. We will now use tools from Nyström sampling
to sparsify the solution obtained from Optimization Problem 5. To do so, we first recall the definition
of effective dimension, or degrees of freedom, for the kernel [42]:
Definition 6 (λ-effective dimension). For a positive semidefinite m × m matrix K and parameter
λ, the λ-effective dimension of K is defined as d_λ(K) = tr(K(K + λmI)⁻¹).
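Using the eigendecomposition, d_λ(K) = Σ_i μ_i/(μ_i + λm), where μ_i are the eigenvalues of K, so the effective dimension is cheap to evaluate once the spectrum is known; a minimal sketch (ours):

```python
import numpy as np

def effective_dimension(K, lam):
    # d_lambda(K) = tr(K (K + lam*m*I)^{-1}) = sum_i mu_i / (mu_i + lam*m),
    # where mu_i are the eigenvalues of K.
    m = K.shape[0]
    mu = np.linalg.eigvalsh(K)
    return float(np.sum(mu / (mu + lam * m)))
```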
Various kernel approximation results have relied on this quantity, and here we state a recent result
due to [28], who gave the first application-independent result showing that there is an efficient way
of computing a set of columns of K such that K̃, a matrix constructed from these columns, is close in
terms of 2-norm to the matrix K. More formally,
Theorem 7 ([28]). For kernel matrix K, there exists an algorithm that gives a set of
O(d_λ(K) log(d_λ(K)/δ)) columns, such that K̃ = KS(SᵀKS)⁺SᵀK, where S is the matrix that
selects the specific columns, satisfies with probability 1 − δ, K̃ ⪯ K ⪯ K̃ + λmI.
It can be shown that K̃ is positive semi-definite. Also, the above implies ||K − K̃||₂ ≤ λm. We use
the decay to approximate the kernel matrix with a low-rank matrix constructed using the columns
of K. Let K̃_γ be the matrix obtained by applying Theorem 7 to K_γ with parameter η > 0, and consider the
following optimization problem:
Optimization Problem 6
minimize_{α∈R^m} (1/m) ||K_γ α − Y||₂² + λ αᵀK̃_γ α
The optimal solution of the above is α̃_γ = (K̃_γ + λmI)⁻¹ Y. Since K̃_γ = K_γS(SᵀK_γS)⁺SᵀK_γ,
solving the above enables us to get a solution α̃ = S(SᵀK_γS)⁺SᵀK_γ α̃_γ, which is a k-sparse
vector for k = O(d_η(K_γ) log(d_η(K_γ)/δ)).
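Concretely, given a set of selected columns, both the Nyström approximation and the sparse coefficient vector are a few lines of linear algebra. In the sketch below (ours), `cols` stands in for the columns that RLS-Nyström sampling [28] would select; uniformly sampled columns are a crude substitute without the leverage-score guarantees of Theorem 7.

```python
import numpy as np

def nystrom_sparse_solution(K_gamma, Y, cols, lam):
    # K_tilde = K S (S^T K S)^+ S^T K  and
    # alpha_tilde = S (S^T K S)^+ S^T K alpha_tilde_gamma.
    m = K_gamma.shape[0]
    S = np.zeros((m, len(cols)))
    S[cols, np.arange(len(cols))] = 1.0
    C = K_gamma @ S                              # the selected columns, m x k
    W_pinv = np.linalg.pinv(S.T @ K_gamma @ S)   # (S^T K S)^+, k x k
    K_tilde = C @ W_pinv @ C.T
    alpha_g = np.linalg.solve(K_tilde + lam * m * np.eye(m), Y)
    alpha_sparse = S @ (W_pinv @ (C.T @ alpha_g))  # k-sparse, supported on cols
    return K_tilde, alpha_sparse
```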
Bounding the Error of the Sparse Solution. We bound the additional error incurred by our sparse
hypothesis α̃ compared to α_B. To do so, we bound the error of each of the approximations (sparsification, preconditioning and Lagrangian relaxation) and then combine them to give the following
theorem.
Theorem 8 (Total Error). For λ = ε²/81B, γ ≤ ε³/729B and η ≤ ε³/729B, we have
(1/m)||K_γ α̃ − Y||₂² ≤ (1/m)||K α_B − Y||₂² + ε.
Computing the Sparsity of the Solution. To compute the sparsity of the solution, we need to bound
d_η(K_γ). We consider the following different eigenvalue decays.
Definition 9 (Eigenvalue Decay). Let the real eigenvalues of a symmetric m × m matrix A be
denoted by λ_1 ≥ · · · ≥ λ_m.
1. A is said to have (C, p)-polynomial eigenvalue decay if for all i ∈ {1, . . . , m}, λ_i ≤ Ci^{-p}.
2. A is said to have C-exponential eigenvalue decay if for all i ∈ {1, . . . , m}, λ_i ≤ Ce^{-i}.
Note that in the above definitions C and p are not necessarily constants. We allow C and p to
depend on other parameters (the choice of these parameters will be made explicit in subsequent
theorem statements). We can now bound the effective dimension in terms of eigenvalue decay:
Theorem 10 (Bounding effective dimension). For γm ≤ η,
1. If K/m has (C, p)-polynomial eigenvalue decay for p > 1, then d_η(K_γ) ≤ (C/((p−1)η))^{1/p} + 2.
2. If K/m has C-exponential eigenvalue decay, then d_η(K_γ) ≤ log(C/((e−1)η)) + 2.
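For a feel for the scale of these bounds (with numbers of our own choosing): for C = 1, p = 2 and η = 10⁻⁴, the polynomial case gives d_η(K_γ) ≤ (1/((2−1)·10⁻⁴))^{1/2} + 2 = 102, while the exponential case gives d_η(K_γ) ≤ ln(1/((e−1)·10⁻⁴)) + 2 ≈ 10.7. Either way the number of selected columns, and hence the compression size, stays far below any reasonable m.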
5 Compression Scheme
The above analysis gives us a sparse solution to the problem and, in turn, an ε-approximation of
the error on the overall sample S with probability 1 − δ. We can now fully define our compression
scheme for the hypothesis class H_ψ with respect to samples satisfying the eigenvalue decay property.
Selection Scheme κ: Given input S = (x_i, y_i)_{i=1}^m,
1. Use RLS-Nyström Sampling [28] to compute K̃_γ = K_γS(SᵀK_γS)⁺SᵀK_γ for η = ε³/5832B and γ = ε³/5832Bm. Let I be the sub-sample corresponding to the columns selected using S.
2. Solve Optimization Problem 6 for λ = ε²/324B to get α̃_γ.
3. Compute the |I|-sparse vector α̃ = S(SᵀK_γS)⁺SᵀK_γ α̃_γ = K_γ⁻¹ K̃_γ α̃_γ (K_γ is invertible as all its eigenvalues are non-zero).
4. Output the subsample I along with α̂, which is α̃ truncated to precision ε/(4M|I|) per non-zero index.
Reconstruction Scheme ρ: Given input subsample I and α̂, output the hypothesis
h_S(x) = clip_{0,1}(wᵀα̂), where w is the vector with entries K(x_i, x) + γm·1[x = x_i] for
i ∈ I and 0 otherwise, with γ = ε³/5832Bm. Note that clip_{a,b}(x) = max(a, min(b, x)) for some a < b.
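The reconstruction map is simple enough to state in code; the sketch below (ours) builds h_S from the stored subsample and truncated coefficients.

```python
import numpy as np

def reconstruct_hypothesis(kernel, X_sub, alpha_hat, gamma_m):
    # Reconstruction map rho: h_S(x) = clip_{0,1}(w^T alpha_hat), where w_i =
    # K(x_i, x) + gamma*m*1[x = x_i] over the stored subsample I.
    def h(x):
        w = np.array([kernel(xi, x) + (gamma_m if np.array_equal(xi, x) else 0.0)
                      for xi in X_sub])
        return float(np.clip(w @ alpha_hat, 0.0, 1.0))
    return h
```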
The following theorem shows that the above is a compression scheme for H_ψ.
Theorem 11. (κ, ρ) is an (ε, δ)-approximate agnostic compression scheme for the hypothesis class
H_ψ for samples S of size k(m, ε, δ, B, M) = O( d log(d/δ) · log( mBM d log(d/δ) / ε⁴ ) ), where d is the
η-effective dimension of K_γ for η = ε³/5832B and γ = ε³/5832Bm.

6 Putting It All Together: From Compression to Learning
We now present our final algorithm: Compressed Kernel Regression (Algorithm 1). Note that the
algorithm is efficient and takes at most O(m³) time.
For our learnability result, we restrict distributions to those that satisfy eigenvalue decay.
Definition 12 (Distribution Satisfying Eigenvalue Decay). Consider a distribution D over X and
kernel function k_ψ. Let S be a sample drawn i.i.d. from the distribution D and let K be the empirical
Gram matrix corresponding to kernel function k_ψ on S.
1. D is said to satisfy (C, p, N)-polynomial eigenvalue decay if with probability 1 − δ over the
drawn sample of size m ≥ N, K/m satisfies (C, p)-polynomial eigenvalue decay.
2. D is said to satisfy (C, N)-exponential eigenvalue decay if with probability 1 − δ over the drawn
sample of size m ≥ N, K/m satisfies C-exponential eigenvalue decay.

Algorithm 1 Compressed Kernel Regression
Input: Samples S = (x_i, y_i)_{i=1}^m, Gram matrix K on S, constants ε, δ > 0, norm bound B and
maximum kernel function value M on X.
1: Using RLS-Nyström Sampling [28] with input (K_γ, ηm) for γ = ε³/5832Bm and η = ε³/5832B,
compute K̃_γ = K_γS(SᵀK_γS)⁺SᵀK_γ. Let I be the subsample corresponding to the columns
selected using S. Note that the number of columns selected depends on the η-effective dimension of K_γ.
2: Solve Optimization Problem 6 for λ = ε²/324B to get α̃_γ over S.
3: Compute α̃ = S(SᵀK_γS)⁺SᵀK_γ α̃_γ = K_γ⁻¹ K̃_γ α̃_γ.
4: Compute α̂ by truncating each entry of α̃ to precision ε/(4M|I|).
Output: h_S such that for all x ∈ X, h_S(x) = clip_{0,1}(wᵀα̂), where w is the vector with entries
K(x_i, x) + γm·1[x = x_i] for i ∈ I and 0 otherwise.
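Tying the pieces together, here is hypothetical glue code for Algorithm 1 in terms of the earlier sketches (`nystrom_sparse_solution`, `reconstruct_hypothesis`); as before, the caller supplies the selected columns, for which uniform sampling is only a stand-in for RLS-Nyström sampling [28].

```python
import numpy as np

def compressed_kernel_regression(X, Y, kernel, cols, B, M, eps):
    # Hypothetical end-to-end sketch of Algorithm 1, not the authors' code.
    m = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    gamma = eps**3 / (5832 * B * m)
    lam = eps**2 / (324 * B)
    K_gamma = K + gamma * m * np.eye(m)
    _, alpha_sparse = nystrom_sparse_solution(K_gamma, np.asarray(Y), cols, lam)
    prec = eps / (4 * M * len(cols))                 # truncation precision
    alpha_hat = np.round(alpha_sparse[cols] / prec) * prec
    return reconstruct_hypothesis(kernel, [X[i] for i in cols], alpha_hat,
                                  gamma * m)
```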
Our main theorem proves generalization of the hypothesis output by Algorithm 1 for distributions
satisfying eigenvalue decay in the above sense.
Theorem 13 (Formal for Theorem 1). Fix function class C with output bounded in [0, 1] and
M-bounded kernel function k_ψ such that C is ε₀-approximated by H_ψ = {x ↦ ⟨v, ψ(x)⟩ | v ∈
K_ψ, ⟨v, v⟩ ≤ B} for some ψ, B. Consider a sample S = {(x_i, y_i)}_{i=1}^m drawn i.i.d. from D on
X × [0, 1]. There exists an algorithm A that outputs a hypothesis h_S = A(S) such that:
1. If D_X satisfies (C, p, m)-polynomial eigenvalue decay with probability 1 − δ/4, then with probability 1 − δ, for m = Õ((CB)^{1/p} log(M) log(1/δ)/ε^{2+3/p}),
E_{(x,y)∼D}[(h_S(x) − y)²] ≤ min_{c∈C} E_{(x,y)∼D}[(c(x) − y)²] + 2ε₀ + ε.
2. If D_X satisfies (C, m)-exponential eigenvalue decay with probability 1 − δ/4, then with probability 1 − δ, for m = Õ(log(CB) log(M) log(1/δ)/ε²),
E_{(x,y)∼D}[(h_S(x) − y)²] ≤ min_{c∈C} E_{(x,y)∼D}[(c(x) − y)²] + 2ε₀ + ε.
Algorithm A runs in time poly(m, n).
Remark: The above theorem can be extended to different rates of eigenvalue decay. For example,
for finite rank r the obtained bound is independent of B but dependent instead on r. Also, as in the
proof of Theorem 10, it suffices for the eigenvalue decay to hold only after sufficiently large i.
7 Learning Neural Networks
Here we apply our main theorem to the problem of learning neural networks. For technical definitions of neural networks, we refer the reader to [43].
Definition 14 (Class of Neural Networks [16]). Let N[σ, D, W, T] be the class of fully-connected,
feed-forward networks with D hidden layers, activation function σ and quantities W and T described as follows:
1. Weight vectors in layer 0 have 2-norm bounded by T.
2. Weight vectors in layers 1, . . . , D have 1-norm bounded by W.
3. For each hidden unit σ(w · z) in the network, we have |w · z| ≤ T (by z we denote the input feeding
into unit σ from the previous layer).
We consider the activation functions σ_relu(x) = max(0, x) and σ_sig(x) = 1/(1 + e^{−x}), though other activation
functions fit within our framework. Goel et al. [16] showed that the class of ReLUs/sigmoids, along
with their compositions, can be approximated by linear functions in a high-dimensional Hilbert space
(corresponding to a particular type of polynomial kernel). As mentioned earlier, the sample complexity of prior work depends linearly on B, which, for even a single ReLU, is exponential in 1/ε.
Assuming sufficiently strong eigenvalue decay, we can show that we can obtain fully polynomial
time algorithms for the above classes.
Theorem 15. For ε, δ > 0, consider D on S^{n−1} × [0, 1] such that:
1. For C_relu = N[σ_relu, 0, ∞, 1], D_X satisfies (C, p, m)-polynomial eigenvalue decay for p ≥ c/ε,
2. For C_relu-D = N[σ_relu, D, W, T], D_X satisfies (C, p, m)-polynomial eigenvalue decay for
p ≥ (cW^D DT/ε)^D,
3. For C_sig-D = N[σ_sig, D, W, T], D_X satisfies (C, p, m)-polynomial eigenvalue decay for p ≥
(cT log(W^D D/ε))^D,
where D_X is the marginal distribution on X = S^{n−1}, c > 0 is some sufficiently large constant, and
C ≤ (n · (1/ε) · log(1/δ))^{c′p} for some constant c′ > 0. The value of m is obtained from Theorem 13
as m = Õ(C^{1/p}/ε^{2+3/p}).
Each decay assumption above implies an algorithm for agnostically learning the corresponding
class on S^{n−1} × [0, 1] with respect to the square loss in time poly(n, 1/ε, log(1/δ)).
Note that assuming an exponential eigenvalue decay (stronger than polynomial) will result in efficient learnability for much broader classes of networks.
Since it is not known how to agnostically learn even a single ReLU with respect to arbitrary distributions on S^{n−1} in polynomial time⁴, much less a network of ReLUs, we state the following corollary
highlighting the decay we require to obtain efficient learnability for simple networks:
Corollary 16 (Restating Corollary 2). Let C be the class of all fully-connected networks of ReLUs
with one hidden layer of size ℓ feeding into a final output ReLU activation, where the 2-norms of
all weight vectors are bounded by 1. Then (suppressing the parameter m for simplicity), assuming
(C, ℓ/ε)-polynomial eigenvalue decay for C = poly(n, 1/ε, ℓ), C is learnable in polynomial time
with respect to square loss on S^{n−1}. If ReLU is replaced with sigmoid, then we require eigenvalue
decay of i^{−√ℓ·log(√ℓ/ε)}.
8 Conclusions and Future Work
We have proposed the first set of distributional assumptions that guarantee fully polynomial-time
algorithms for learning expressive classes of neural networks (without restricting the structure of
the network). The key abstraction was that of a compression scheme for kernel approximations,
specifically Nyström sampling. We proved that eigenvalue decay of the Gram matrix reduces the
dependence on the norm B in the kernel regression problem.
Prior distributional assumptions, such as the underlying marginal equaling a Gaussian, neither lead
to fully polynomial-time algorithms nor are representative of real-world data sets⁵. Eigenvalue decay, on the other hand, has been observed in practice and does lead to provably efficient algorithms
for learning neural networks.
A natural criticism of our assumption is that the rate of eigenvalue decay we require is too strong.
In some cases, especially for large-depth networks with many hidden units, this may be true⁶. Note,
however, that our results show that even moderate eigenvalue decay will lead to improved algorithms. Further, it is quite possible our assumptions can be relaxed. An obvious question for future
work is what is the minimal rate of eigenvalue decay needed for efficient learnability? Another direction would be to understand how these eigenvalue decay assumptions relate to other distributional
assumptions.
⁴ Goel et al. [16] show that agnostically learning a single ReLU over {−1, 1}^n is as hard as learning sparse parities with noise. This reduction can be extended to the case of distributions over S^{n−1} [3].
⁵ Despite these limitations, we still think uniform or Gaussian assumptions are worthwhile and have provided highly nontrivial learning results.
⁶ It is useful to keep in mind that agnostically learning even a single ReLU with respect to all distributions seems computationally intractable, and that our required eigenvalue decay in this case is only a function of the accuracy parameter ε.
Acknowledgements. We would like to thank Misha Belkin and Nikhil Srivastava for very helpful
conversations regarding kernel ridge regression and eigenvalue decay. We also thank Daniel Hsu,
Karthik Sridharan, and Justin Thaler for useful feedback. The analogy between eigenvalue decay
and power-law graphs is due to Raghu Meka.
References
[1] Peter Auer, Mark Herbster, and Manfred K. Warmuth. Exponentially many local minima for
single neurons. In Advances in Neural Information Processing Systems, volume 8, pages 316–322. The MIT Press, 1996.
[2] Haim Avron, Huy Nguyen, and David Woodruff. Subspace embeddings for the polynomial
kernel. In Advances in Neural Information Processing Systems, pages 2258–2266, 2014.
[3] Peter Bartlett, Daniel Kane, and Adam Klivans. personal communication.
[4] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4), 2005.
[5] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[6] Pawel Brach, Marek Cygan, Jakub Lacki, and Piotr Sankowski. Algorithmic complexity of
power law networks. CoRR, abs/1507.02426, 2015.
[7] Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with
gaussian inputs. CoRR, abs/1702.07966, 2017.
[8] Anna Choromanska, Mikael Henaff, Michaël Mathieu, Gérard Ben Arous, and Yann LeCun.
The loss surfaces of multilayer networks. In AISTATS, volume 38 of JMLR Workshop and
Conference Proceedings. JMLR.org, 2015.
[9] Andrew Cotter, Shai Shalev-Shwartz, and Nati Srebro. Learning optimally sparse support
vector machines. In Proceedings of the 30th International Conference on Machine Learning
(ICML-13), pages 266–274, 2013.
[10] Amit Daniely. Complexity theoretic limitations on learning halfspaces. In STOC, pages 105–117. ACM, 2016.
[11] Amit Daniely. SGD learns the conjugate kernel class of the network. CoRR, abs/1702.08503,
2017.
[12] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In NIPS, pages 2253–2261, 2016.
[13] Ofir David, Shay Moran, and Amir Yehudayoff. On statistical learning via the lens of compression. arXiv preprint arXiv:1610.03592, 2016.
[14] Petros Drineas and Michael W Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.
[15] Petros Drineas, Michael W Mahoney, and S Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008.
[16] Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the ReLU in polynomial time. arXiv preprint arXiv:1611.10258, 2016.
[17] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of nonconvexity: Guaranteed training of neural networks using tensor methods. arXiv preprint
arXiv:1506.08473, 2015.
[18] Kenji Kawaguchi. Deep learning without poor local minima. In NIPS, pages 586–594, 2016.
[19] Adam R. Klivans and Pravesh Kothari. Embedding hard learning problems into gaussian space.
In APPROX-RANDOM, volume 28 of LIPIcs, pages 793–809. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2014.
[20] Adam R. Klivans and Raghu Meka. Moment-matching polynomials. Electronic Colloquium
on Computational Complexity (ECCC), 20:8, 2013.
[21] Adam R. Klivans and Alexander A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. J. Comput. Syst. Sci, 75(1):2–12, 2009.
[22] Anton Krohmer. Finding Cliques in Scale-Free Networks. Master's thesis, Saarland University,
Germany, 2012.
[23] Dima Kuzmin and Manfred K. Warmuth. Unlabeled compression schemes for maximum
classes. Journal of Machine Learning Research, 8:2047–2081, 2007.
[24] Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Technical report, University of California, Santa Cruz, 1986.
[25] Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Technical
report, 1986.
[26] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training
neural networks. In Advances in Neural Information Processing Systems, pages 855–863,
2014.
[27] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on
large-scale shallow learning. CoRR, abs/1703.10622, 2017.
[28] Cameron Musco and Christopher Musco. Recursive sampling for the Nyström method. arXiv
preprint arXiv:1605.07583, 2016.
[29] B. Schölkopf, J. Shawe-Taylor, A.J. Smola, and R.C. Williamson. Generalization bounds via eigenvalues of the Gram matrix. Technical Report 99-035, NeuroCOLT, 1999.
[30] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines,
regularization, optimization, and beyond. MIT press, 2002.
[31] Hanie Sedghi and Anima Anandkumar. Provable methods for training neural networks with
sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
[32] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to
algorithms. Cambridge university press, 2014.
[33] Ohad Shamir. The sample complexity of learning linear predictors with the squared loss.
Journal of Machine Learning Research, 16:3475–3486, 2015.
[34] Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint
arXiv:1609.01037, 2016.
[35] John Shawe-Taylor, Christopher KI Williams, Nello Cristianini, and Jaz Kandola. On the
eigenspectrum of the Gram matrix and the generalization error of kernel PCA. IEEE Transactions on Information Theory, 51(7):2510–2522, 2005.
[36] Le Song, Santosh Vempala, John Wilmes, and Bo Xie. On the complexity of learning neural
networks. arXiv preprint arXiv:1707.04615, 2017.
[37] Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. CoRR, abs/1605.08361, 2016.
[38] Ameet Talwalkar and Afshin Rostamizadeh. Matrix coherence and the Nyström method. CoRR,
abs/1408.2044, 2014.
[39] Christopher KI Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 13th International Conference on Neural Information Processing Systems, pages 661–667. MIT Press, 2000.
[40] Bo Xie, Yingyu Liang, and Le Song. Diversity leads to generalization in neural networks.
CoRR, abs/1611.03131, 2016.
[41] Qiuyi Zhang, Rina Panigrahy, and Sushant Sachdeva. Electron-proton dynamics in deep learning. CoRR, abs/1702.00458, 2017.
[42] Tong Zhang. Effective dimension and generalization of kernel learning. In Advances in Neural
Information Processing Systems, pages 471–478, 2003.
[43] Yuchen Zhang, Jason D Lee, and Michael I Jordan. l1-regularized neural networks are improperly learnable in polynomial time. In International Conference on Machine Learning, pages
993–1001, 2016.
[44] Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, and Michael I. Jordan. Learning halfspaces and neural networks with random initialization. CoRR, abs/1511.07948, 2015.
6,429 | 6,815 | MMD GAN: Towards Deeper Understanding of
Moment Matching Network
Chun-Liang Li¹,* Wei-Cheng Chang¹,* Yu Cheng² Yiming Yang¹ Barnabás Póczos¹
¹ Carnegie Mellon University, ² AI Foundations, IBM Research
{chunlial,wchang2,yiming,bapoczos}@cs.cmu.edu chengyu@us.ibm.com
(* denotes equal contribution)
Abstract
Generative moment matching network (GMMN) is a deep generative model that
differs from Generative Adversarial Network (GAN) by replacing the discriminator
in GAN with a two-sample test based on kernel maximum mean discrepancy
(MMD). Although some theoretical guarantees of MMD have been studied, the
empirical performance of GMMN is still not as competitive as that of GAN on
challenging and large benchmark datasets. The computational efficiency of GMMN
is also less desirable in comparison with GAN, partially due to its requirement for
a rather large batch size during the training. In this paper, we propose to improve
both the model expressiveness of GMMN and its computational efficiency by
introducing adversarial kernel learning techniques, as the replacement of a fixed
Gaussian kernel in the original GMMN. The new approach combines the key ideas
in both GMMN and GAN, hence we name it MMD GAN. The new distance measure
in MMD GAN is a meaningful loss that enjoys the advantage of weak* topology
and can be optimized via gradient descent with relatively small batch sizes. In our
evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA
and LSUN, the performance of MMD GAN significantly outperforms GMMN, and
is competitive with other representative GAN works.
1 Introduction
The essence of unsupervised learning is to model the underlying distribution P_X of the data X. Deep generative models [1, 2] use deep learning to approximate the distribution of complex datasets with promising results. However, modeling an arbitrary density is a statistically challenging task [3]. In many
applications, such as caption generation [4], accurate density estimation is not even necessary since
we are only interested in sampling from the approximated distribution.
Rather than estimating the density of P_X, Generative Adversarial Network (GAN) [5] starts from a base distribution P_Z over Z, such as a Gaussian distribution, then trains a transformation network g_θ such that P_θ ≈ P_X, where P_θ is the underlying distribution of g_θ(z) and z ∼ P_Z. During the training, GAN-based algorithms require an auxiliary network f to estimate the distance between P_X and P_θ. Different probabilistic (pseudo) metrics have been studied [5–8] under the GAN framework.
Instead of training an auxiliary network f for measuring the distance between P_X and P_θ, the generative moment matching network (GMMN) [9, 10] uses kernel maximum mean discrepancy (MMD) [11], which is the centerpiece of nonparametric two-sample testing, to determine distribution distances. During the training, g_θ is trained to pass the hypothesis test (minimize the MMD distance). [11] shows
even the simple Gaussian kernel enjoys the strong theoretical guarantees (Theorem 1). However, the
empirical performance of GMMN does not meet its theoretical promise: there are no promising empirical results comparable with GAN on challenging benchmarks [12, 13]. Computationally,
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
it also requires a larger batch size than GAN needs for training, which is considered to be less efficient [9, 10, 14, 8].
In this work, we try to improve GMMN and consider using MMD with adversarially learned kernels
instead of fixed Gaussian kernels to have better hypothesis testing power. The main contributions of
this work are:
• In Section 2, we prove that training g_θ via MMD with learned kernels is continuous and differentiable, which guarantees the model can be trained by gradient descent. Second, we prove that the new distance measure via kernel learning is a sensitive loss function for the distance between P_X and P_θ (weak* topology). Empirically, the loss decreases when two distributions get closer.
• In Section 3, we propose a practical realization called MMD GAN that learns the generator g_θ with the adversarially trained kernel. We further propose a feasible-set reduction to speed up and stabilize the training of MMD GAN.
• In Section 5, we show that MMD GAN is computationally more efficient than GMMN and can be trained with a much smaller batch size. We also demonstrate that MMD GAN has promising results on challenging datasets, including CIFAR-10, CelebA and LSUN, where GMMN fails. To our best knowledge, we are the first MMD-based work to achieve comparable results with other GAN works on these datasets.
Finally, we also study the connection to existing works in Section 4. Interestingly, we show that Wasserstein GAN [8] is a special case of the proposed MMD GAN under certain conditions. The unified
view shows more connections between moment matching and GAN, which can potentially inspire
new algorithms based on well-developed tools in statistics [15]. Our experiment code is available at
https://github.com/OctoberChang/MMD-GAN.
2 GAN, Two-Sample Test and GMMN
Assume we are given data {x_i}_{i=1}^n, where x_i ∈ X and x_i ∼ P_X. If we are interested in sampling from P_X, it is not necessary to estimate the density of P_X. Instead, Generative Adversarial Network (GAN) [5] trains a generator g_θ parameterized by θ to transform samples z ∼ P_Z, where z ∈ Z, into g_θ(z) ∼ P_θ such that P_θ ≈ P_X. To measure the similarity between P_X and P_θ via their samples {x_i}_{i=1}^n and {g_θ(z_j)}_{j=1}^n during the training, [5] trains the discriminator f_φ parameterized by φ for help. The learning is done by playing a two-player game, where f_φ tries to distinguish x_i and g_θ(z_j), while g_θ aims to confuse f_φ by generating g_θ(z_j) similar to x_i.
On the other hand, distinguishing two distributions from finite samples is known as the two-sample test in statistics. One way to conduct a two-sample test is via kernel maximum mean discrepancy (MMD) [11]. Given two distributions P and Q, and a kernel k, the square of the MMD distance is defined as
    M_k(P, Q) = ‖μ_P − μ_Q‖²_H = E_P[k(x, x′)] − 2 E_{P,Q}[k(x, y)] + E_Q[k(y, y′)].
Theorem 1. [11] Given a kernel k, if k is a characteristic kernel, then Mk (P, Q) = 0 iff P = Q.
GMMN: One example of a characteristic kernel is the Gaussian kernel k(x, x′) = exp(−‖x − x′‖²). Based on Theorem 1, [9, 10] propose the generative moment-matching network (GMMN), which trains g_θ by
    min_θ M_k(P_X, P_θ),    (1)
with a fixed Gaussian kernel k rather than training an additional discriminator f as GAN.
2.1 MMD with Kernel Learning
In practice we use finite samples from the distributions to estimate the MMD distance. Given X = {x_1, · · · , x_n} ∼ P and Y = {y_1, · · · , y_n} ∼ Q, one estimator of M_k(P, Q) is
    M̂_k(X, Y) = \frac{1}{\binom{n}{2}} Σ_{i≠i′} k(x_i, x_{i′}) − \frac{2}{\binom{n}{2}} Σ_{i≠j} k(x_i, y_j) + \frac{1}{\binom{n}{2}} Σ_{j≠j′} k(y_j, y_{j′}).
Because of the sampling variance, M̂(X, Y) may not be zero even when P = Q. We then conduct a hypothesis test with null hypothesis H₀: P = Q. For a given allowable probability of false rejection α, we can only reject H₀, which implies P ≠ Q, if M̂(X, Y) > c_α for some chosen threshold c_α > 0. Otherwise, Q passes the test and Q is indistinguishable from P under this test. Please refer to [11] for more details.
Intuitively, if kernel k cannot result in a high MMD distance M_k(P, Q) when P ≠ Q, then M̂_k(P, Q) has more chance to be smaller than c_α. Then we are unlikely to reject the null hypothesis H₀ with finite samples, which implies Q is not distinguishable from P. Therefore, instead of training g_θ via (1) with a pre-specified kernel k as GMMN, we consider training g_θ via
    min_θ max_{k ∈ K} M_k(P_X, P_θ),    (2)
which takes different possible characteristic kernels k ∈ K into account. On the other hand, we could also view (2) as replacing the fixed kernel k in (1) with the adversarially learned kernel argmax_{k∈K} M_k(P_X, P_θ) to have a stronger signal where P ≠ P_θ to train g_θ. We refer interested readers to [16] for more rigorous discussions about testing power and increasing MMD distances.
However, it is difficult to optimize over all characteristic kernels when we solve (2). By [11, 17], if f is an injective function and k is characteristic, then the resulting kernel k̃ = k ∘ f, where k̃(x, x′) = k(f(x), f(x′)), is still characteristic. If we have a family of injective functions parameterized by φ, denoted as f_φ, we are able to change the objective to be
    min_θ max_φ M_{f_φ}(P_X, P_θ).    (3)
In this paper, we consider the case of combining Gaussian kernels with injective functions f_φ, where k̃(x, x′) = exp(−‖f_φ(x) − f_φ(x′)‖²). One example function class of f is {f_σ | f_σ(x) = σx, σ > 0}, which is equivalent to kernel bandwidth tuning. A more complicated realization will be discussed in Section 3. In the following, we abuse the notation M_{f_φ}(P, Q) to denote the MMD distance given the composition kernel of the Gaussian kernel and f_φ. Note that [18] considers linear combinations of characteristic kernels, which can also be incorporated into the discussed composition kernels. A more general kernel is studied in [19].
2.2 Properties of MMD with Kernel Learning
[8] discuss different distances between distributions adopted by existing deep learning algorithms, and
show many of them are discontinuous, such as Jensen–Shannon divergence [5] and total variation [7], except for Wasserstein distance. The discontinuity makes gradient descent infeasible for training.
From (3), we train g_θ via minimizing max_φ M_{f_φ}(P_X, P_θ). Next, we show that max_φ M_{f_φ}(P_X, P_θ) also enjoys the advantage of being a continuous and differentiable objective in θ under mild assumptions.
Assumption 2. g : Z × R^m → X is locally Lipschitz, where Z ⊆ R^d. We will denote by g_θ(z) the evaluation at (z, θ) for convenience. Given f_φ and a probability distribution P_z over Z, g satisfies Assumption 2 if there are local Lipschitz constants L(θ, z) for f_φ ∘ g, independent of φ, such that E_{z∼P_z}[L(θ, z)] < +∞.
Theorem 3. The generator function g_θ parameterized by θ is under Assumption 2. Let P_X be a fixed distribution over X and Z be a random variable over the space Z. We denote by P_θ the distribution of g_θ(Z); then max_φ M_{f_φ}(P_X, P_θ) is continuous everywhere and differentiable almost everywhere in θ.
If g_θ is parameterized by a feed-forward neural network, it satisfies Assumption 2 and can be trained via gradient descent as well as backpropagation, since the objective is continuous and differentiable, by Theorem 3. More technical discussions are shown in Appendix B.
Theorem 4 (weak* topology). Let {P_n} be a sequence of distributions. Considering n → ∞, under mild assumptions, max_φ M_{f_φ}(P_X, P_n) → 0 ⟺ P_n →_D P_X, where →_D means converging in distribution [3].
Theorem 4 shows that max_φ M_{f_φ}(P_X, P_n) is a sensible cost function for the distance between P_X and P_n: the distance decreases as P_n gets closer to P_X, which benefits the supervision of the improvement during the training. All proofs are deferred to Appendix A. In the next section, we introduce a practical realization of training g_θ via optimizing min_θ max_φ M_{f_φ}(P_X, P_θ).
3 MMD GAN
To approximate (3), we use neural networks to parameterize g_θ and f_φ with expressive power. For g_θ, the assumption is that it is locally Lipschitz, which commonly used feed-forward neural networks satisfy. Also, the gradient ∇_θ(max_φ f_φ ∘ g_θ) has to be bounded, which can be done by clipping [8] or gradient penalty [20]. The non-trivial part is that f_φ has to be injective. For an injective function f, there exists a function f⁻¹ such that f⁻¹(f(x)) = x, ∀x ∈ X and f⁻¹(f(g(z))) = g(z), ∀z ∈ Z,¹ which can be approximated by an autoencoder. In the following, we denote by φ = {φ_e, φ_d} the parameters of the discriminator networks, which consist of an encoder f_{φ_e}, and train the corresponding decoder f_{φ_d} ≈ f⁻¹ to regularize f. The objective (3) is relaxed to be
    min_θ max_φ M_{f_{φ_e}}(P(X), P(g_θ(Z))) − λ E_{y∈X∪g(Z)} ‖y − f_{φ_d}(f_{φ_e}(y))‖².    (4)
Note that we ignore the autoencoder objective when we train θ, but we use (4) for a concise presentation. We note that the empirical study suggests the autoencoder objective is not necessary for successful GAN training, as we will show in Section 5, even though the injective property is required in Theorem 1.
¹ Note that injective does not necessarily mean invertible.
The proposed algorithm is similar to GAN [5], which aims to optimize two neural networks g_θ and f_φ in a minimax formulation, while the meaning of the objective is different. In [5], f_{φ_e} is a discriminator (binary) classifier to distinguish two distributions. In the proposed algorithm, distinguishing two distributions is still done by a two-sample test via MMD, but with an adversarially learned kernel parametrized by f_{φ_e}; g_θ is then trained to pass the hypothesis test. More connections and differences with related works are discussed in Section 4. Because of the similarity to GAN, we call the proposed algorithm MMD GAN. We present an implementation with weight clipping in Algorithm 1, but one can easily extend to other Lipschitz approximations, such as gradient penalty [20].
Algorithm 1: MMD GAN, our proposed algorithm.
input: η the learning rate, c the clipping parameter, B the batch size, n_c the number of iterations of the discriminator per generator update.
initialize generator parameter θ and discriminator parameter φ;
while θ has not converged do
    for t = 1, . . . , n_c do
        Sample minibatches {x_i}_{i=1}^B ∼ P(X) and {z_j}_{j=1}^B ∼ P(Z)
        g_φ ← ∇_φ [ M_{f_{φ_e}}(P(X), P(g_θ(Z))) − λ E_{y∈X∪g(Z)} ‖y − f_{φ_d}(f_{φ_e}(y))‖² ]
        φ ← φ + η · RMSProp(φ, g_φ)
        φ ← clip(φ, −c, c)
    Sample minibatches {x_i}_{i=1}^B ∼ P(X) and {z_j}_{j=1}^B ∼ P(Z)
    g_θ ← ∇_θ M_{f_{φ_e}}(P(X), P(g_θ(Z)))
    θ ← θ − η · RMSProp(θ, g_θ)
Encoding Perspective of MMD GAN: Besides using kernel selection to explain MMD GAN, the other way to see the proposed MMD GAN is to view f_{φ_e} as a feature transformation function, so that the kernel two-sample test is performed on this transformed feature space (i.e., the code space of the autoencoder). The optimization then looks for a manifold with stronger signals for the MMD two-sample test. From this perspective, [9] is the special case of MMD GAN in which f_{φ_e} is the identity mapping function. In such a circumstance, the kernel two-sample test is conducted in the original data space.
3.1 Feasible Set Reduction
Theorem 5. For any f_φ, there exists f_{φ′} such that M_{f_φ}(P_r, P_θ) = M_{f_{φ′}}(P_r, P_θ) and E_x[f_φ(x)] ⪰ E_z[f_{φ′}(g_θ(z))].
With Theorem 5, we could reduce the feasible set of φ during the optimization by solving
    min_θ max_φ M_{f_φ}(P_r, P_θ)   s.t.   E[f_φ(x)] ⪰ E[f_φ(g_θ(z))],
whose optimal solution is still equivalent to that of solving (2).
However, it is hard to solve the constrained optimization problem with backpropagation. We relax the constraint by ordinal regression [21] to be
    min_θ max_φ M_{f_φ}(P_r, P_θ) + λ · min( E[f_φ(x)] − E[f_φ(g_θ(z))], 0 ),
which only penalizes the objective when the constraint is violated. In practice, we observe that reducing the feasible set makes the training faster and more stable.
4 Related Works
There has been a recent surge of work on improving GAN [5]. We review some related works here.
Connection with WGAN: If we compose f_φ with a linear kernel instead of a Gaussian kernel, and restrict the output dimension h to be 1, we then have the objective
    min_θ max_φ ‖E[f_φ(x)] − E[f_φ(g_θ(z))]‖².    (5)
Parameterizing f_φ and g_θ with neural networks, and assuming there exists φ′ ∈ Φ such that f_{φ′} = −f_φ for all φ, recovers Wasserstein GAN (WGAN) [8].² If we treat f_φ(x) as the data transform function, WGAN can be interpreted as first-order moment matching (linear kernel), while MMD GAN aims to match infinitely many moments, via the Taylor expansion of the Gaussian kernel [9]. Theoretically, the Wasserstein distance has theoretical guarantees similar to Theorems 1, 3 and 4. In practice, [22] show neural networks do not have enough capacity to approximate the Wasserstein distance. In Section 5, we demonstrate that matching high-order moments benefits the results. [23] also propose McGAN, which matches second-order moments from the primal-dual norm perspective. However, that algorithm requires matrix (tensor) decompositions because of exact moment matching [24], which is hard to scale to higher-order moment matching. On the other hand, by giving up exact moment matching, MMD GAN can match high-order moments with kernel tricks. More detailed discussions are in Appendix B.3.
Difference from Other Works with Autoencoders: Energy-based GANs [7, 25] also utilize the autoencoder (AE) in the discriminator from the energy-model perspective, minimizing the reconstruction error of real samples x while maximizing the reconstruction error of generated samples g_θ(z). In contrast, MMD GAN uses the AE to approximate invertible functions by minimizing the reconstruction errors of both real samples x and generated samples g_θ(z). Also, [8] show EBGAN approximates total variation, with the drawback of discontinuity, while MMD GAN optimizes the MMD distance. The other line of works [2, 26, 9] aims to match the AE code space f(x), and utilizes the decoder f_dec(·). [2, 26] match the distribution of f(x) and z via different distribution distances and generate data (e.g., images) by f_dec(z). [9] uses MMD to match f(x) and g(z), and generates data via f_dec(g(z)). The proposed MMD GAN matches f(x) and f(g(z)), and generates data via g(z) directly, as GAN does. [27] is similar to MMD GAN but considers KL-divergence, without showing the continuity and weak* topology guarantees that we prove in Section 2.
Other GAN Works: In addition to the discussed works, there are several extensions of GAN. [28] proposes using the linear kernel to match the first moment of the discriminator's latent features. [14] considers the variance of the empirical MMD score during the training. Also, [14] only improves the latent feature matching in [28] by using kernel MMD, instead of proposing an adversarial training framework as we study in Section 2. [29] uses the Wasserstein distance to match the distribution of the autoencoder loss instead of the data. One can consider extending [29] to higher-order matching based on the proposed MMD GAN. A parallel work [30] uses the energy distance, which can be treated as MMD GAN with a different kernel. However, there are some potential problems with its critic; more discussion can be found in [31].
5 Experiment
We train MMD GAN for image generation on the MNIST [32], CIFAR-10 [33], CelebA [13], and LSUN bedrooms [12] datasets, where the sizes of the training sets are 50K, 50K, 160K, and 3M respectively. All sample images are generated from fixed random noise vectors and are not cherry-picked.
² Theoretically, they are not equivalent, but the practical neural network approximation results in the same algorithm.
Network architecture: In our experiments, we follow the architecture of DCGAN [34] to design g_θ by its generator and f_φ by its discriminator, except for expanding the output layer of f_φ to be h-dimensional.
Kernel designs: The loss function of MMD GAN is implicitly associated with a family of characteristic kernels. Similar to the prior MMD seminal papers [10, 9, 14], we consider a mixture of K RBF kernels k(x, x′) = Σ_{q=1}^K k_{σ_q}(x, x′), where k_{σ_q} is a Gaussian kernel with bandwidth parameter σ_q. Tuning the kernel bandwidth σ_q optimally remains an open problem. In this work, we fix K = 5 and σ_q to be {1, 2, 4, 8, 16}, and leave f_φ to learn the kernel (feature representation) under these σ_q.
Hyper-parameters: We use RMSProp [35] with learning rate of 0.00005 for a fair comparison with
WGAN, as suggested in its original paper [8]. We ensure the boundedness of the model parameters of the discriminator by clipping the weights point-wise to the range [−0.01, 0.01], as required by Assumption 2.
Assumption 2. The dimensionality h of the latent space is manually set according to the complexity
of the dataset. We thus use h = 16 for MNIST, h = 64 for CelebA, and h = 128 for CIFAR-10 and
LSUN bedrooms. The batch size is set to be B = 64 for all datasets.
5.1 Qualitative Analysis
Figure 1: Generated samples from GMMN-D (dataspace), GMMN-C (codespace) and our MMD GAN with batch size B = 64. Panels: (a) GMMN-D MNIST, (b) GMMN-C MNIST, (c) MMD GAN MNIST; (d) GMMN-D CIFAR-10, (e) GMMN-C CIFAR-10, (f) MMD GAN CIFAR-10.
We start by comparing MMD GAN with GMMN on two standard benchmarks, MNIST and CIFAR-10. We consider two variants of GMMN. The first one is the original GMMN, which trains the generator by minimizing the MMD distance in the original data space; we call it GMMN-D. To compare with MMD GAN, we also pretrain an autoencoder for projecting data to a manifold, then fix the autoencoder as a feature transformation and train the generator by minimizing the MMD distance in the code space; we call it GMMN-C.
The results are pictured in Figure 1. Both GMMN-D and GMMN-C are able to generate meaningful
digits on MNIST because of the simple data structure. By a closer look, nonetheless, the boundary
and shape of the digits in Figure 1a and 1b are often irregular and non-smooth. In contrast, the sample
Figure 2: Generated samples from WGAN and MMD GAN on MNIST, CelebA, and LSUN bedroom datasets. Panels: (a) WGAN MNIST, (b) WGAN CelebA, (c) WGAN LSUN; (d) MMD GAN MNIST, (e) MMD GAN CelebA, (f) MMD GAN LSUN.
digits in Figure 1c are more natural, with smooth outlines and sharper strokes. For the CIFAR-10 dataset, both GMMN variants fail to generate meaningful images, producing only low-level visual features. We observe similar failures on other complex large-scale datasets such as CelebA and LSUN bedrooms, so those results are omitted. On the other hand, the proposed MMD GAN successfully outputs natural images with sharp boundaries and high diversity. The results in Figure 1 confirm the success of the proposed adversarially learned kernels in enriching statistical testing power, which is the key difference between GMMN and MMD GAN.
If we increase the batch size of GMMN to 1024, the image quality improves; however, it is still not competitive with MMD GAN at B = 64. The images are shown in Appendix C. This demonstrates that the proposed MMD GAN can be trained more efficiently than GMMN, with a smaller batch size.
Comparisons with GANs: There are several representative extensions of GANs. We consider the recent state-of-the-art WGAN [8] based on the DCGAN structure [34], because of the connection with MMD GAN discussed in Section 4. The results are shown in Figure 2. For MNIST, the digits generated from WGAN in Figure 2a are more unnatural, with peculiar strokes. In contrast, the digits from MMD GAN in Figure 2d enjoy smoother contours. Furthermore, both WGAN and MMD GAN generate diversified digits, avoiding the mode-collapse problems that appear in the literature on training GANs. For CelebA, we can see the differences between the samples generated from WGAN and MMD GAN. Specifically, we observe varied poses, expressions, genders, skin colors and light exposure in Figures 2b and 2e. On a closer look (view on screen, zoomed in), we observe that faces from WGAN have a higher chance of being blurry and twisted, while faces from MMD GAN are more spontaneous, with sharp and acute outlines. As for the LSUN dataset, we could not distinguish salient differences between the samples generated from MMD GAN and WGAN.
5.2 Quantitative Analysis
To quantitatively measure the quality and diversity of generated samples, we compute the inception score [28] on CIFAR-10 images. The inception score is used for GANs to measure sample quality and diversity with a pretrained inception model [28]. Models that generate collapsed samples have a relatively low score. Table 1 lists the results for 50K samples generated by various unsupervised generative models trained on the CIFAR-10 dataset. The inception scores of [36, 37, 28] are directly derived from the corresponding references.
score is better than other GAN techniques except for DFM [36]. This seems to confirm empirically
that higher order of moment matching between the real data and fake sample distribution benefits
generating more diversified sample images. Also note DFM appears compatible with our method and
combing training techniques in DFM is a possible avenue for future work.
Table 1: Inception scores.
Method             | Scores ± std.
Real data          | 11.95 ± .20
DFM [36]           | 7.72
ALI [37]           | 5.34
Improved GANs [28] | 4.36
MMD GAN            | 6.17 ± .07
WGAN               | 5.88 ± .07
GMMN-C             | 3.94 ± .04
GMMN-D             | 3.47 ± .03
Figure 3: Computation time per generator iteration versus batch size B (see Section 5.4).
5.3 Stability of MMD GAN
We further illustrate how the MMD distance correlates well with the quality of the generated samples. Figure 4 plots the evolution of the MMD GAN estimate of the MMD distance during training for the MNIST, CelebA and LSUN datasets. We report the moving average of M̂_{f_φ}(P_X, P_θ) to smooth the graph and reduce the variance caused by mini-batch stochastic training. We observe that during the whole training process, samples generated from the same noise vector across iterations remain similar in nature (e.g., face identity and bedroom style are alike, while details and backgrounds evolve). This qualitative observation indicates valuable stability of the training process. The decreasing curve, together with the improving quality of images, supports the weak* topology result shown in Theorem 4. Also, we can see from the plot that the model converges very quickly. In Figure 4b, for example, it converges shortly after tens of thousands of generator iterations on the CelebA dataset.
Figure 4: Training curves and generated samples at different stages of training on (a) MNIST, (b) CelebA, and (c) LSUN Bedrooms. We can see a clear correlation between lower distance and better sample quality.
5.4 Computation Issue
We conduct a time complexity analysis with respect to the batch size B. The time complexity of each iteration is O(B) for WGAN and O(KB²) for our proposed MMD GAN with a mixture of K RBF kernels. The quadratic complexity O(B²) of MMD GAN is introduced by computing the kernel matrix, which is sometimes criticized for being inapplicable with large batch sizes in practice. However, we point out that several recent works, such as EBGAN [7], also match pairwise relations between samples in a batch, leading to O(B²) complexity as well.
Empirically, we find that in a GPU environment, highly parallelized matrix operations tremendously alleviate the quadratic time to almost linear time for modest B. Figure 3 compares the computational time per generator iteration versus different B on a Titan X. When B = 64, which is adopted for training MMD GAN in our experimental setting, the time per iteration of WGAN and MMD GAN is 0.268 and 0.676 seconds, respectively. When B = 1024, which is used for training GMMN in its references [9], the time per iteration becomes 4.431 and 8.565 seconds, respectively. This result supports our argument that the empirical computational time for MMD GAN is not quadratically expensive compared to WGAN, given powerful GPU parallel computation.
5.5 Better Lipschitz Approximation and Necessity of the Auto-Encoder
Although we used weight clipping for the Lipschitz constraint in Assumption 2, one can also use other approximations, such as the gradient penalty [20]. On the other hand, in Algorithm 1, we present an algorithm with an auto-encoder to be consistent with the theory, which requires f_φ to be injective. However, we observe that it is not necessary in practice. We show some preliminary results of training MMD GAN with the gradient penalty and without the auto-encoder in Figure 5. The preliminary study indicates that MMD GAN can generate satisfactory results with other Lipschitz constraint approximations. One potential future work is conducting more thorough empirical comparison studies between the different approximations.
Figure 5: MMD GAN results using gradient penalty [20] and without the auto-encoder reconstruction loss during training. Panels: (a) CIFAR-10, G_iter = 300K; (b) CelebA, G_iter = 300K.
6 Discussion
We introduce a new deep generative model trained via MMD with adversarially learned kernels. We further study its theoretical properties and propose a practical realization, MMD GAN, which can be trained with a much smaller batch size than GMMN and has competitive performance with state-of-the-art GANs. We can view MMD GAN as a first practical step toward connecting moment matching networks and GANs. One important direction is applying developed tools from moment matching [15] to general GAN works, based on the connections shown by MMD GAN. Also, in Section 4, we connect WGAN and MMD GAN by first-order and infinite-order moment matching. [24] shows that finite-order moment matching (≤ 5) achieves the best performance on domain adaptation. One could extend MMD GAN to this setting by using polynomial kernels. Last, in theory, an injective mapping f is necessary for the theoretical guarantees. However, we observe that it is not mandatory in practice, as we show in Section 5.5. One conjecture is that parameterizing f with neural networks usually learns an injective mapping with high probability, which is worth more study as future work.
Acknowledgments
We thank the reviewers for their helpful comments. This work is supported in part by the National
Science Foundation (NSF) under grants IIS-1546329 and IIS-1563887.
References
[1] Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In AISTATS, 2009.
[2] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2013.
[3] Larry Wasserman. All of statistics: a concise course in statistical inference. Springer Science &
Business Media, 2013.
[4] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov,
Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with
visual attention. In ICML, 2015.
[5] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[6] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural
samplers using variational divergence minimization. In NIPS, 2016.
[7] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based Generative Adversarial Network. In ICLR,
2017.
[8] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. In ICML, 2017.
[9] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML,
2015.
[10] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative
neural networks via maximum mean discrepancy optimization. In UAI, 2015.
[11] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. JMLR, 2012.
[12] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun:
Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv
preprint arXiv:1506.03365, 2015.
[13] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the
wild. In CVPR, 2015.
[14] Dougal J. Sutherland, Hsiao-Yu Fish Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas,
Alexander J. Smola, and Arthur Gretton. Generative models and model criticism via optimized
maximum mean discrepancy. In ICLR, 2017.
[15] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. arXiv preprint arXiv:1605.09522, 2016.
[16] Kenji Fukumizu, Arthur Gretton, Gert R Lanckriet, Bernhard Schölkopf, and Bharath K Sriperumbudur. Kernel choice and classifiability for RKHS embeddings of probability distributions. In NIPS, 2009.
[17] A. Gretton, B. Sriperumbudur, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, and
K. Fukumizu. Optimal kernel choice for large-scale two-sample tests. In NIPS, 2012.
[18] Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano
Pontil, Kenji Fukumizu, and Bharath K Sriperumbudur. Optimal kernel choice for large-scale
two-sample tests. In NIPS, 2012.
[19] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel
learning. In AISTATS, 2016.
[20] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville.
Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.
[21] Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Support vector learning for ordinal
regression. 1999.
[22] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and
equilibrium in generative adversarial nets (gans). arXiv preprint arXiv:1703.00573, 2017.
[23] Youssef Mroueh, Tom Sercu, and Vaibhava Goel. McGAN: Mean and covariance feature matching GAN. arXiv preprint arXiv:1702.08398, 2017.
[24] Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, and Susanne Saminger-Platz. Central moment discrepancy (CMD) for domain-invariant representation learning. arXiv preprint arXiv:1702.08811, 2017.
[25] Shuangfei Zhai, Yu Cheng, Rogério Schmidt Feris, and Zhongfei Zhang. Generative adversarial
networks as variational training of energy based models. CoRR, abs/1611.01799, 2016.
[26] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[27] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Adversarial generator-encoder networks. arXiv preprint arXiv:1704.02304, 2017.
[28] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. In NIPS, 2016.
[29] David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative
adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[30] Marc G Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan,
Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.
[31] Arthur Gretton. Notes on the Cramer GAN. https://medium.com/towards-data-science/notes-on-the-cramer-gan-752abd505c00, 2017. Accessed: 2017-11-2.
[32] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning
applied to document recognition. Proceedings of the IEEE, 1998.
[33] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images.
2009.
[34] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with
deep convolutional generative adversarial networks. In ICLR, 2016.
[35] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running
average of its recent magnitude. COURSERA: Neural networks for machine learning, 2012.
[36] D Warde-Farley and Y Bengio. Improving generative adversarial networks with denoising
feature matching. In ICLR, 2017.
[37] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In ICLR, 2017.
[38] Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R.G.
Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 2010.
6,430 | 6,816 | The Reversible Residual Network:
Backpropagation Without Storing Activations
Aidan N. Gomez∗1, Mengye Ren∗1,2,3, Raquel Urtasun1,2,3, Roger B. Grosse1,2
University of Toronto1
Vector Institute for Artificial Intelligence2
Uber Advanced Technologies Group3
{aidan, mren, urtasun, rgrosse}@cs.toronto.edu
Abstract
Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both
deeper and wider. However, memory consumption becomes a bottleneck, as one
needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets
where each layer's activations can be reconstructed exactly from the next layer's.
Therefore, the activations for most layers need not be stored in memory during
backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10,
CIFAR-100, and ImageNet, establishing nearly identical classification accuracy
to equally-sized ResNets, even though the activation storage requirements are
independent of depth.
1 Introduction
Over the last five years, deep convolutional neural networks have enabled rapid performance improvements across a wide range of visual processing tasks [19, 26, 20]. For the most part, the
state-of-the-art networks have been growing deeper. For instance, deep residual networks (ResNets)
[13] are the state-of-the-art architecture across multiple computer vision tasks [19, 26, 20]. The
key architectural innovation behind ResNets was the residual block, which allows information to be
passed directly through, making the backpropagated error signals less prone to exploding or vanishing.
This made it possible to train networks with hundreds of layers, and this vastly increased depth led to
significant performance gains.
Nearly all modern neural networks are trained using backpropagation. Since backpropagation
requires storing the network's activations in memory, the memory cost is proportional to the number
of units in the network. Unfortunately, this means that as networks grow wider and deeper, storing
the activations imposes an increasing memory burden, which has become a bottleneck for many
applications [34, 37]. Graphics processing units (GPUs) have limited memory capacity, leading to
constraints often exceeded by state-of-the-art architectures, some of which reach over one thousand
layers [13]. Training large networks may require parallelization across multiple GPUs [7, 28], which
is both expensive and complicated to implement. Due to memory constraints, modern architectures
are often trained with a mini-batch size of 1 (e.g. [34, 37]), which is inefficient for stochastic gradient
methods [11]. Reducing the memory cost of storing activations would significantly improve our
ability to efficiently train wider and deeper networks.
∗These authors contributed equally.
Code available at https://github.com/renmengye/revnet-public
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: (left) A traditional residual block as in Equation 2. (right-top) A basic residual function.
(right-bottom) A bottleneck residual function.
We present Reversible Residual Networks (RevNets), a variant of ResNets which is reversible in the
sense that each layer's activations can be computed from the subsequent reversible layer's activations.
This enables us to perform backpropagation without storing the activations in memory, with the
exception of a handful of non-reversible layers. The result is a network architecture whose activation
storage requirements are independent of depth, and typically at least an order of magnitude smaller
compared with equally sized ResNets. Surprisingly, constraining the architecture to be reversible
incurs no noticeable loss in performance: in our experiments, RevNets achieved nearly identical
classification accuracy to standard ResNets on CIFAR-10, CIFAR-100, and ImageNet, with only a
modest increase in the training time.
2 Background
2.1 Backpropagation
Backpropagation [25] is a classic algorithm for computing the gradient of a cost function with respect
to the parameters of a neural network. It is used in nearly all neural network algorithms, and is now
taken for granted in light of neural network frameworks which implement automatic differentiation
[1, 2]. Because achieving the memory savings of our method requires manual implementation of part
of the backprop computations, we briefly review the algorithm.
We treat backprop as an instance of reverse mode automatic differentiation [24]. Let v1 , . . . , vK
denote a topological ordering of the nodes in the network's computation graph G, where vK denotes
the cost function C. Each node is defined as a function fi of its parents in G. Backprop computes
the total derivative dC/dv_i for each node in the computation graph. This total derivative defines the effect on C of an infinitesimal change to v_i, taking into account the indirect effects through the
descendants of vk in the computation graph. Note that the total derivative is distinct from the partial
derivative ∂f/∂x_i of a function f with respect to one of its arguments x_i, which does not take into
account the effect of changes to xi on the other arguments. To avoid using a small typographical
difference to represent a significant conceptual difference, we will denote total derivatives using
v̄_i = dC/dv_i.
Backprop iterates over the nodes in the computation graph in reverse topological order. For each
node v_i, it computes the total derivative v̄_i using the following rule:

    v̄_i = Σ_{j ∈ Child(i)} (∂f_j/∂v_i)^⊤ v̄_j ,    (1)
where Child(i) denotes the children of node v_i in G and ∂f_j/∂v_i denotes the Jacobian matrix.
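To make the recursion in Eq. (1) concrete, here is a minimal Python sketch of reverse-mode accumulation over a topological order. The graph representation and the vjp helper (one vector-Jacobian product per edge) are our own illustrative assumptions, not constructs from the paper.

    # Minimal sketch of the backprop recursion in Eq. (1), assuming the graph
    # is given as: nodes in topological order (last node is the cost C),
    # children[i] = children of node i, and vjp[(i, j)](v) = (df_j/dv_i)^T v.
    def backprop(nodes, children, vjp):
        v_bar = {i: 0.0 for i in nodes}
        v_bar[nodes[-1]] = 1.0                     # dC/dC = 1
        for i in reversed(nodes):                  # reverse topological order
            for j in children[i]:
                v_bar[i] += vjp[(i, j)](v_bar[j])  # Eq. (1)
        return v_bar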
2.2 Deep Residual Networks
One of the main difficulties in training very deep networks is the problem of exploding and vanishing
gradients, first observed in the context of recurrent neural networks [3]. In particular, because a deep
network is a composition of many nonlinear functions, the dependencies across distant layers can be
highly complex, making the gradient computations unstable. Highway networks [29] circumvented
this problem by introducing skip connections. Similarly, deep residual networks (ResNets) [13] use
2
a functional form which allows information to pass directly through the network, thereby keeping
the computations stable. ResNets currently represent the state-of-the-art in object recognition [13],
semantic segmentation [35] and image generation [32]. Outside of vision, residuals have displayed
impressive performance in audio generation [31] and neural machine translation [16].
ResNets are built out of modules called residual blocks, which have the following form:
y = x + F(x),    (2)
where F, a function called the residual function, is typically a shallow neural net. ResNets are robust
to exploding and vanishing gradients because each residual block is able to pass signals directly
through, allowing the signals to be propagated faithfully across many layers. As displayed in Figure
1, residual functions for image recognition generally consist of stacked batch normalization ("BN")
[14], rectified linear activation ("ReLU") [23] and convolution layers (with filters of shape three "C3"
and one "C1").
As in He et al. [13], we use two residual block architectures: the basic residual function (Figure
1 right-top) and the bottleneck residual function (Figure 1 right-bottom). The bottleneck residual
consists of three convolutions, the first is a point-wise convolution which reduces the dimensionality of
the feature dimension, the second is a standard convolution with filter size 3, and the final point-wise
convolution projects into the desired output feature depth.
a(x) = ReLU(BN(x))
c_k(x) = Conv_{k×k}(a(x))
Basic(x) = c_3(c_3(x))
Bottleneck(x) = c_1(c_3(c_1(x)))    (3)
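As an illustration of how Eq. (3) composes layers, the following sketch builds the two residual function variants from placeholder callables; bn, relu, and conv are assumed to be supplied elsewhere (the actual model uses TensorFlow ops).

    # Schematic sketch of the residual functions in Eq. (3); bn and relu are
    # layer callables and conv(k) returns a k-by-k convolution callable.
    def make_residual_fns(bn, relu, conv):
        a = lambda x: relu(bn(x))                   # a(x) = ReLU(BN(x))
        c = lambda k: (lambda x: conv(k)(a(x)))     # c_k(x) = Conv_{kxk}(a(x))
        basic = lambda x: c(3)(c(3)(x))             # Basic(x) = c3(c3(x))
        bottleneck = lambda x: c(1)(c(3)(c(1)(x)))  # Bottleneck(x)
        return basic, bottleneck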
2.3 Reversible Architectures
Various reversible neural net architectures have been proposed, though for motivations distinct from
our own. Deco and Brauer [8] develop a similar reversible architecture to ensure the preservation
of information in unsupervised learning contexts. The proposed architecture is indeed residual and
constructed to produce a lower triangular Jacobian matrix with ones along the diagonal. In Deco and
Brauer [8], the residual connections are composed of all ?prior? neurons in the layer, while NICE
and our own architecture segments a layer into pairs of neurons and additively connect one with a
residual function of the other. Maclaurin et al. [21] made use of the reversible nature of stochastic
gradient descent to tune hyperparameters via gradient descent. Our proposed method is inspired by
nonlinear independent components estimation (NICE) [9, 10], an approach to unsupervised generative
modeling. NICE is based on learning a non-linear bijective transformation between the data space
and a latent space. The architecture is composed of a series of blocks defined as follows, where x1
and x2 are a partition of the units in each layer:
y1 = x1
y2 = x2 + F(x1)    (4)
Because the model is invertible and its Jacobian has unit determinant, the log-likelihood and its
gradients can be tractably computed. This architecture imposes some constraints on the functions the
network can represent; for instance, it can only represent volume-preserving mappings. Follow-up
work by Dinh et al. [10] addressed this limitation by introducing a new reversible transformation:
y1 = x1
y2 = x2 ⊙ exp(F(x1)) + G(x1).    (5)
Here, ⊙ represents the Hadamard or element-wise product. This transformation has a non-unit Jacobian determinant due to multiplication by exp(F(x1)).
Figure 2: (a) the forward, and (b) the reverse computations of a residual block, as in Equation 8.
3 Methods
We now introduce Reversible Residual Networks (RevNets), a variant of Residual Networks which is
reversible in the sense that each layer's activations can be computed from the next layer's activations.
We discuss how to reconstruct the activations online during backprop, eliminating the need to store
the activations in memory.
3.1 Reversible Residual Networks
RevNets are composed of a series of reversible blocks, which we now define. We must partition the
units in each layer into two groups, denoted x1 and x2 ; for the remainder of the paper, we assume
this is done by partitioning the channels, since we found this to work the best in our experiments.2
Each reversible block takes inputs (x1 , x2 ) and produces outputs (y1 , y2 ) according to the following
additive coupling rules ? inspired by NICE?s [9] transformation in Equation 4 ? and residual functions
F and G analogous to those in standard ResNets:
y1 = x1 + F(x2)
y2 = x2 + G(y1)    (6)
Each layer's activations can be reconstructed from the next layer's activations as follows:
x2 = y2 − G(y1)
x1 = y1 − F(x2)    (7)
Note that unlike residual blocks, reversible blocks must have a stride of 1 because otherwise the layer
discards information, and therefore cannot be reversible. Standard ResNet architectures typically
have a handful of layers with a larger stride. If we define a RevNet architecture analogously, the
activations must be stored explicitly for all non-reversible layers.
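A minimal NumPy sketch of the coupling in Equations (6) and (7); the toy residual functions below are arbitrary choices of ours, and the assertion checks that the inverse recovers the inputs.

    import numpy as np

    def rev_forward(x1, x2, F, G):
        y1 = x1 + F(x2)                 # Eq. (6)
        y2 = x2 + G(y1)
        return y1, y2

    def rev_inverse(y1, y2, F, G):
        x2 = y2 - G(y1)                 # Eq. (7)
        x1 = y1 - F(x2)
        return x1, x2

    # Usage with arbitrary (toy) residual functions:
    rng = np.random.default_rng(0)
    W_f, W_g = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    F = lambda x: np.tanh(x @ W_f)
    G = lambda x: np.tanh(x @ W_g)
    x1, x2 = rng.normal(size=4), rng.normal(size=4)
    assert np.allclose((x1, x2), rev_inverse(*rev_forward(x1, x2, F, G), F, G))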
3.2 Backpropagation Without Storing Activations
To derive the backprop procedure, it is helpful to rewrite the forward (left) and reverse (right)
computations in the following way:
Forward:                     Reverse:
z1 = x1 + F(x2)              z1 = y1
y1 = z1                      x2 = y2 − G(z1)
y2 = x2 + G(z1)              x1 = z1 − F(x2)    (8)
Even though z1 = y1, the two variables represent distinct nodes of the computation graph, so the total derivatives z̄1 and ȳ1 are different. In particular, z̄1 includes the indirect effect through y2, while ȳ1 does not. This splitting lets us implement the forward and backward passes for reversible blocks in a modular fashion. In the backwards pass, we are given the activations (y1, y2) and their total derivatives (ȳ1, ȳ2) and wish to compute the inputs (x1, x2), their total derivatives (x̄1, x̄2), and the
total derivatives for any parameters associated with F and G. (See Section 2.1 for our backprop notation.) We do this by combining the reconstruction formulas (Eqn. 8) with the backprop rule (Eqn. 1). The resulting algorithm is given as Algorithm 1.³

²The possibilities we explored included columns, checkerboard, rows and channels, as done by [10]. We found that performance was consistently superior using the channel-wise partitioning scheme and comparable across the remaining options. We note that channel-wise partitioning has also been explored in the context of multi-GPU training via "grouped" convolutions [18], and more recently, convolutional neural networks have seen significant success by way of "separable" convolutions [27, 6].
By applying Algorithm 1 repeatedly, one can perform backprop on a sequence of reversible blocks if
one is given simply the activations and their derivatives for the top layer in the sequence. In general,
a practical architecture would likely also include non-reversible layers, such as subsampling layers;
the inputs to these layers would need to be stored explicitly during backprop. However, a typical
ResNet architecture involves long sequences of residual blocks and only a handful of subsampling
layers; if we mirror the architecture of a ResNet, there would be only a handful of non-reversible
layers, and the number would not grow with the depth of the network. In this case, the storage cost of
the activations would be small, and independent of the depth of the network.
Computational overhead. In general, for a network with N connections, the forward and backward
passes of backprop require approximately N and 2N add-multiply operations, respectively. For a
RevNet, the residual functions each must be recomputed during the backward pass. Therefore, the
number of operations required for reversible backprop is approximately 4N , or roughly 33% more
than ordinary backprop. (This is the same as the overhead introduced by checkpointing [22].) In
practice, we have found the forward and backward passes to be about equally expensive on GPU
architectures; if this is the case, then the computational overhead of RevNets is closer to 50%.
Algorithm 1 Reversible Residual Block Backprop
1: function BlockReverse((y1, y2), (ȳ1, ȳ2))
2:    z1 ← y1
3:    x2 ← y2 − G(z1)
4:    x1 ← z1 − F(x2)
5:    z̄1 ← ȳ1 + (∂G/∂z1)^⊤ ȳ2        ▷ ordinary backprop
6:    x̄2 ← ȳ2 + (∂F/∂x2)^⊤ z̄1        ▷ ordinary backprop
7:    x̄1 ← z̄1
8:    w̄F ← (∂F/∂wF)^⊤ z̄1             ▷ ordinary backprop
9:    w̄G ← (∂G/∂wG)^⊤ ȳ2             ▷ ordinary backprop
10:   return (x1, x2) and (x̄1, x̄2) and (w̄F, w̄G)
11: end function
Modularity. Note that Algorithm 1 is agnostic to the form of the residual functions F and G. The
steps which use the Jacobians of these functions are implemented in terms of ordinary backprop, which
can be achieved by calling automatic differentiation routines (e.g. tf.gradients or Theano.grad).
Therefore, even though implementing our algorithm requires some amount of manual implementation
of backprop, one does not need to modify the implementation in order to change the residual functions.
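To illustrate Algorithm 1 end to end without an autodiff framework, the sketch below instantiates F and G as linear maps, for which every vector-Jacobian product in lines 5-9 is a transposed matrix multiply; the shapes and names here are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)
    WF, WG = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    F = lambda x: x @ WF          # toy linear residual: dF/dx = WF
    G = lambda x: x @ WG

    def block_reverse(y1, y2, y1_bar, y2_bar):
        # Lines 2-4: reconstruct the activations (reverse column of Eq. 8).
        z1 = y1
        x2 = y2 - G(z1)
        x1 = z1 - F(x2)
        # Lines 5-9: ordinary backprop through the recomputed functions.
        z1_bar = y1_bar + y2_bar @ WG.T     # (dG/dz1)^T y2_bar
        x2_bar = y2_bar + z1_bar @ WF.T     # (dF/dx2)^T z1_bar
        x1_bar = z1_bar
        WF_bar = np.outer(x2, z1_bar)       # (dF/dWF)^T z1_bar for F(x) = x WF
        WG_bar = np.outer(z1, y2_bar)       # (dG/dWG)^T y2_bar for G(x) = x WG
        return (x1, x2), (x1_bar, x2_bar), (WF_bar, WG_bar)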
Numerical error. While Eqn. 8 reconstructs the activations exactly when done in exact arithmetic,
practical float32 implementations may accumulate numerical error during backprop. We study the
effect of numerical error in Section 5.2; while the error is noticeable in our experiments, it does not
significantly affect final performance. We note that if numerical error becomes a significant issue,
one could use fixed-point arithmetic on the x's and y's (but ordinary floating point to compute F and
G), analogously to [21]. In principle, this would enable exact reconstruction while introducing little
overhead, since the computation of the residual functions and their derivatives (which dominate the
computational cost) would be unchanged.
4 Related Work
A number of steps have been taken towards reducing the storage requirements of extremely deep
neural networks. Much of this work has focused on the modification of memory allocation within the
training algorithms themselves [1, 2]. Checkpointing [22, 5, 12] is one well-known technique which
³We assume for notational clarity that the residual functions do not share parameters, but Algorithm 1 can be trivially extended to a network with weight sharing, such as a recurrent neural net.
Table 1: Computational and spatial complexity comparisons. L denotes the number of layers.
Technique                      Spatial Complexity (Activations)    Computational Complexity
Naive                          O(L)                                O(L)
Checkpointing [22]             O(√L)                               O(L)
Recursive Checkpointing [5]    O(log L)                            O(L log L)
Reversible Networks (Ours)     O(1)                                O(L)
trades off spatial and temporal complexity; during backprop, one stores a subset of the activations
(called checkpoints) and recomputes the remaining activations as required. Martens and Sutskever
[22] adopted this technique in the context of training recurrent neural networks on a sequence of length T using backpropagation through time [33], storing every ⌈√T⌉ layers and recomputing the
intermediate activations between each during the backward pass. Chen et al. [5] later proposed to
recursively apply this strategy on the sub-graph between checkpoints. Gruslys et al. [12] extended
this approach by applying dynamic programming to determine a storage strategy which minimizes
the computational cost for a given memory budget.
To analyze the computational and memory complexity of these alternatives, assume for simplicity a
feed-forward network consisting of L identical layers. Again, for simplicity, assume the units are
chosen such that the cost of forward propagation or backpropagation through a single layer is 1, and
the memory cost of storing a single layer's activations is 1. In this case, ordinary backpropagation has computational cost 2L and storage cost L for the activations. The method of Martens and Sutskever [22] requires 2√L storage, and it demands an additional forward computation for each layer, leading
to a total computational cost of 3L. The recursive algorithm of Chen et al. [5] reduces the required
memory to O(log L), while increasing the computational cost to O(L log L). In comparison to these,
our method incurs O(1) storage cost (as only a single block must be stored) and computational
cost of 3L. The time and space complexities of these methods are summarized in Table 1.
Another approach to saving memory is to replace backprop itself. The decoupled neural interface [15]
updates each weight matrix using a gradient approximation, termed the synthetic gradient, computed
based on only the node's activations instead of the global network error. This removes any long-range
gradient computation dependencies in the computation graph, leading to O(1) activation storage
requirements. However, these savings are achieved only after the synthetic gradient estimators have
been trained; that training requires all the activations to be stored.
5 Experiments
We experimented with RevNets on three standard image classification benchmarks: CIFAR-10,
CIFAR-100, [17] and ImageNet [26]. In order to make our results directly comparable with standard
ResNets, we tried to match both the computational depth and the number of parameters as closely as
possible. We observed that each reversible block has a computation depth of two original residual
blocks. Therefore, we reduced the total number of residual blocks by approximately half, while
approximately doubling the number of channels per block, since they are partitioned into two. Table 2
shows the details of the RevNets and their corresponding traditional ResNet. In all of our experiments,
we were interested in whether our RevNet architectures (which are far more memory efficient) were
able to match the classification accuracy of ResNets of the same size.
5.1 Implementation
We implemented the RevNets using the TensorFlow library [1]. We manually make calls to TensorFlow's automatic differentiation method (i.e. tf.gradients) to construct the backward-pass computation graph without referencing activations computed in the forward pass. While building the backward graph, we reconstruct the input activations (x̂1, x̂2) for each block (Equation 8); second, we apply tf.stop_gradient on the reconstructed inputs to prevent auto-diff from traversing into the reconstructions' computation graph, then call the forward functions again to compute (ŷ1, ŷ2) (Equation 8). Lastly, we use auto-diff to traverse from (ŷ1, ŷ2) to (x̂1, x̂2) and the parameters (wF, wG).
Table 2: Architectural details. "Bottleneck" indicates whether the residual unit type used was the Bottleneck or Basic variant (see Equation 3). "Units" indicates the number of residual units in each group. "Channels" indicates the number of filters used in each unit in each group. "Params" indicates the number of parameters, in millions, each network uses.

Dataset           Version      Bottleneck   Units      Channels           Params (M)
CIFAR-10 (100)    ResNet-32    No           5-5-5      16-16-32-64        0.46 (0.47)
CIFAR-10 (100)    RevNet-38    No           3-3-3      32-32-64-112       0.46 (0.48)
CIFAR-10 (100)    ResNet-110   No           18-18-18   16-16-32-64        1.73 (1.73)
CIFAR-10 (100)    RevNet-110   No           9-9-9      32-32-64-128       1.73 (1.74)
CIFAR-10 (100)    ResNet-164   Yes          18-18-18   16-16-32-64        1.70 (1.73)
CIFAR-10 (100)    RevNet-164   Yes          9-9-9      32-32-64-128       1.75 (1.79)
ImageNet          ResNet-101   Yes          3-4-23-3   64-128-256-512     44.5
ImageNet          RevNet-104   Yes          2-2-11-2   128-256-512-832    45.2
Table 3: Classification error on CIFAR
                  CIFAR-10 [17]            CIFAR-100 [17]
Architecture      ResNet     RevNet        ResNet     RevNet
32 (38)           7.14%      7.24%         29.95%     28.96%
110               5.74%      5.76%         26.44%     25.40%
164               5.24%      5.17%         23.37%     23.69%
This implementation leverages the convenience of the auto-diff functionality to avoid manually deriving gradients; however, the computational cost becomes 5N, compared with 4N for Algorithm 1, and 3N for ordinary backpropagation (see Section 3.2). The full theoretical efficiency can be realized by reusing the F and G graphs' activations that were computed in the reconstruction steps (lines 3 and 4 of Algorithm 1).
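A simplified sketch of this reconstruction trick in TensorFlow 1.x style; f_block and g_block stand for weight-sharing residual subgraphs, and variable management and the training loop are omitted.

    import tensorflow as tf  # TF 1.x style API (tf.gradients, tf.stop_gradient)

    def rev_block_backward(y1, y2, dy1, dy2, f_block, g_block):
        # Reconstruct the inputs from the outputs (Eq. 8, reverse column) ...
        x2_hat = y2 - g_block(y1)
        x1_hat = y1 - f_block(x2_hat)
        # ... and cut the graph so auto-diff does not traverse the reconstruction.
        x1_hat = tf.stop_gradient(x1_hat)
        x2_hat = tf.stop_gradient(x2_hat)
        # Rebuild the forward pass from the reconstructed inputs.
        y1_hat = x1_hat + f_block(x2_hat)
        y2_hat = x2_hat + g_block(y1_hat)
        # Ordinary backprop from (y1_hat, y2_hat) back to the inputs.
        dx1, dx2 = tf.gradients([y1_hat, y2_hat], [x1_hat, x2_hat],
                                grad_ys=[dy1, dy2])
        return x1_hat, x2_hat, dx1, dx2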
Table 4: Top-1 classification error on ImageNet (single crop)

ResNet-101    RevNet-104
23.01%        23.10%

5.2 RevNet performance
Our ResNet implementation roughly matches the previously reported classification error rates [13].
As shown in Table 3, our RevNets roughly matched the error rates of traditional ResNets (of roughly
equal computational depth and number of parameters) on CIFAR-10 & 100 as well as ImageNet
(Table 4). In no condition did the RevNet underperform the ResNet by more than 0.5%, and in some
cases, RevNets achieved slightly better performance. Furthermore, Figure 3 compares ImageNet
training curves of the ResNet and RevNet architectures; reversibility did not lead to any noticeable
per-iteration slowdown in training. (As discussed above, each RevNet update is about 1.5-2× more
expensive, depending on the implementation.) We found it surprising that the performance matched
so closely, because reversibility would appear to be a significant constraint on the architecture, and
one might expect large memory savings to come at the expense of classification error.
Impact of numerical error. As described in Section 3.2, reconstructing the activations over many
layers causes numerical errors to accumulate. In order to measure the magnitude of this effect, we
computed the angle between the gradients computed using stored and reconstructed activations over
the course of training. Figure 4 shows how this angle evolved over the course of training for a
CIFAR-10 RevNet; while the angle increased during training, it remained small in magnitude.
7
Table 5: Comparison of parameter and activation storage costs for ResNet and RevNet.
Task          Parameter Cost    Activation Cost
ResNet-101    ≈ 178MB           ≈ 5250MB
RevNet-104    ≈ 180MB           ≈ 1440MB
Figure 3: Training curves for ResNet-101 vs. RevNet-104 on ImageNet, with both networks having
approximately the same depth and number of free parameters. Left: training cross entropy; Right:
classification error, where dotted lines indicate training, and solid lines validation.
Figure 4: Left: angle (degrees) between the gradient computed using stored and reconstructed
activations throughout training. While the angle grows during training, it remains small in magnitude.
We measured 4 more epochs after regular training length and did not observe any instability. Middle:
training cross entropy; Right: classification error, where dotted lines indicate training, and solid
lines validation; No meaningful difference in training efficiency or final performance was observed
between stored and reconstructed activations.
Figure 4 also shows training curves for CIFAR-10 networks trained using both methods of computing
gradients. Despite the numerical error from reconstructing activations, both methods performed
almost indistinguishably in terms of the training efficiency and the final performance.
6 Conclusion and Future Work
We introduced RevNets, a neural network architecture where the activations for most layers need not
be stored in memory. We found that RevNets provide considerable reduction in the memory footprint
at little or no cost to performance. As future work, we are currently working on applying RevNets to
the task of semantic segmentation, the performance of which is limited by a critical memory bottleneck: the input image patch needs to be large enough to process high resolution images; meanwhile, the
batch size also needs to be large enough to perform effective batch normalization (e.g. [36]). We also
intend to develop reversible recurrent neural net architectures; this is a particularly interesting use
case, because weight sharing implies that most of the memory cost is due to storing the activations
(rather than parameters). Another interesting direction is predicting the activations of previous layers'
activation, similar to synthetic gradients. We envision our reversible block as a module which will
soon enable training larger and more powerful networks with limited computational resources.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis,
J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed
systems. arXiv preprint arXiv:1603.04467, 2016.
[2] R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien,
J. Bayer, A. Belikov, A. Belopolsky, et al. Theano: A Python framework for fast computation
of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
[3] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent
is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[4] J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous sgd.
arXiv preprint arXiv:1604.00981, 2016.
[5] T. Chen, B. Xu, C. Zhang, and C. Guestrin. Training deep nets with sublinear memory cost.
arXiv preprint arXiv:1604.06174, 2016.
[6] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint
arXiv:1610.02357, 2016.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang,
Q. V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
[8] G. Deco and W. Brauer. Higher order statistical decorrelation without information loss. In
G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing
Systems 7, pages 247-254. MIT Press, 1995. URL http://papers.nips.cc/paper/901-higher-order-statistical-decorrelation-without-information-loss.pdf.
[9] L. Dinh, D. Krueger, and Y. Bengio. NICE: Non-linear independent components estimation.
2015.
[10] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using real NVP. In ICLR, 2017.
[11] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia,
and K. He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint
arXiv:1706.02677, 2017.
[12] A. Gruslys, R. Munos, I. Danihelka, M. Lanctot, and A. Graves. Memory-efficient backpropagation through time. In NIPS, 2016.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In ICML, 2015.
[15] M. Jaderberg, W. M. Czarnecki, S. Osindero, O. Vinyals, A. Graves, and K. Kavukcuoglu.
Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016.
[16] N. Kalchbrenner, L. Espeholt, K. Simonyan, A. v. d. Oord, A. Graves, and K. Kavukcuoglu.
Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
[17] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical
report, University of Toronto, Department of Computer Science, 2009.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional
neural networks. In NIPS, 2012.
[19] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D.
Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, pages 396-404, 1990.
[20] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick.
Microsoft COCO: Common objects in context. In ECCV, 2014.
[21] D. Maclaurin, D. K. Duvenaud, and R. P. Adams. Gradient-based hyperparameter optimization
through reversible learning. In ICML, 2015.
[22] J. Martens and I. Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pages 479-535. Springer, 2012.
[23] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In
ICML, 2010.
[24] L. B. Rall. Automatic differentiation: Techniques and applications. 1981.
[25] D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy,
A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.
[27] L. Sifre. Rigid-motion scattering for image classification. PhD thesis, Ph. D. thesis, 2014.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556, 2014.
[29] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and
A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[31] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR
abs/1609.03499, 2016.
[32] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. Conditional image
generation with pixelCNN decoders. In NIPS, 2016.
[33] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280, 1989.
[34] Z. Wu, C. Shen, and A. v. d. Hengel. High-performance semantic segmentation using very deep
fully convolutional networks. arXiv preprint arXiv:1604.04339, 2016.
[35] Z. Wu, C. Shen, and A. v. d. Hengel. Wider or deeper: Revisiting the ResNet model for visual
recognition. arXiv preprint arXiv:1611.10080, 2016.
[36] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
[37] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using
cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
7 Appendix
7.1 Experiment details
For our CIFAR-10/100 experiments, we fixed the mini-batch size to be 100. The learning rate was
initialized to 0.1 and decayed by a factor of 10 at 40K and 60K training steps, training for a total of
80K steps. The weight decay constant was set to 2 × 10⁻⁴ and the momentum was set to 0.9. We
subtracted the mean image, and augmented the dataset with random cropping and random horizontal
flipping.
For our ImageNet experiments, we fixed the mini-batch size to be 256, split across 4 Titan X GPUs
with data parallelism [28]. We employed synchronous SGD [4] with momentum of 0.9. The model
was trained for 600K steps, with factor-of-10 learning rate decays scheduled at 160K, 320K, and
480K steps. Weight decay was set to 1 × 10⁻⁴. We applied standard input preprocessing and data augmentation used in training Inception networks [30]: pixel intensity rescaled to within [0, 1], random cropping of size 224 × 224 around object bounding boxes, random scaling, random
horizontal flipping, and color distortion, all of which are available in TensorFlow. For the original
ResNet-101, We were unable to fit a mini-batch size of 256 on 4 GPUs, so we instead averaged the
gradients from two serial runs with mini-batch size 128 (32 per GPU). For the RevNet, we were able
to fit a mini-batch size of 256 on 4 GPUs (i.e. 64 per GPU).
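For reference, the ImageNet settings above can be collected into a single configuration; this is only a restatement of the stated hyperparameters, not code from the authors' release.

    imagenet_config = {
        "batch_size": 256,          # split across 4 GPUs (64 per GPU for RevNet)
        "optimizer": "synchronous_sgd",
        "momentum": 0.9,
        "weight_decay": 1e-4,
        "total_steps": 600_000,
        "lr_decay_steps": [160_000, 320_000, 480_000],  # factor-of-10 decays
        "augmentation": ["random_crop_224x224", "random_scale",
                         "random_horizontal_flip", "color_distortion"],
    }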
7.2 Memory savings
Fully realizing the theoretical gains of RevNets can be a non-trivial task and require precise low-level
GPU memory management. We experimented with two different implementations within TensorFlow:
With the first, we were able to reach reasonable spatial gains using "Tensor Handles" provided by
TensorFlow, which preserve the activations of graph nodes between calls to session.run. Multiple
session.run calls ensures that TensorFlow frees up activations that will not be referenced later. We
segment our computation graph into separate sections and save the bordering activations and gradients
into the persistent Tensor Handles. During the forward pass of the backpropagation algorithm, each
section of the graph is executed sequentially with the input tensors being reloaded from the previous
section and the output tensors being saved for use in the subsequent section. We empirically verified
the memory gain by fitting at least twice the number of examples while training ImageNet. Each
GPU can now fit a mini-batch size of 128 images, compared the original ResNet, which can only fit a
mini-batch size of 32. The graph splitting trick brings only a small computational overhead (around
10%).
The second and most significant spatial gains were made by implementing each residual stack as a
tf.while_loop with the back_prop parameter set to False. This setting ensures that activations
of each layer in the residual stack (aside from the last) are discarded from memory immediately
after their utility expires. We use the tf.while_loops for both the forward and backward passes of
the layers, ensuring both efficiently discard activations. Using this implementation we were able to
train a 600-layer RevNet on the ImageNet image classification challenge on a single GPU; despite
being prohibitively slow to train this demonstrates the potential for massive savings in spatial costs of
training extremely deep networks.
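A simplified sketch of this second approach, again in TensorFlow 1.x style; the block callables and unit count are illustrative, and back_prop=False is the setting described above.

    import tensorflow as tf  # TF 1.x style; a simplified sketch of the idea

    def run_stack(x1, x2, num_units, f_block, g_block):
        # Iterate a reversible stack without retaining per-layer activations;
        # back_prop=False lets TensorFlow free intermediates immediately.
        def body(i, a, b):
            y1 = a + f_block(b)
            y2 = b + g_block(y1)
            return i + 1, y1, y2
        _, y1, y2 = tf.while_loop(
            cond=lambda i, a, b: i < num_units,
            body=body,
            loop_vars=(tf.constant(0), x1, x2),
            back_prop=False)
        return y1, y2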
6,431 | 6,817 | Fast Rates for Bandit Optimization with
Upper-Confidence Frank-Wolfe
Quentin Berthet∗
University of Cambridge
[email protected]
Vianney Perchet†
ENS Paris-Saclay & Criteo Research, Paris
[email protected]
Abstract
We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback. In this problem, the objective is to minimize a global loss function of all the actions, not necessarily a
cumulative loss. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve
this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired
by techniques for bandits and convex optimization. We give theoretical guarantees for the performance of this algorithm over various classes of functions, and
discuss the optimality of these results.
Introduction
In online optimization problems, a decision maker chooses at each round t ≥ 1 an action θ_t from
some given action space, observes some information through a feedback mechanism in order to
minimize a loss, function of the set of actions {θ_1, . . . , θ_T}. Traditionally, this objective is computed as a cumulative loss of the form Σ_t ℓ_t(θ_t) [20, 34], or as a function thereof [2, 3, 16, 32].
Examples include classical multi-armed bandit problems where the action space is finite with K
elements, in stochastic or adversarial settings [9]. In these problems, the loss at round t can be
written as ℓ_t(e_{θ_t}) for a linear form ℓ_t on R^K, and basis vectors e_i. More generally, this includes also bandit problems over a convex body C, where the action at each round consists in picking x_t ∈ C and where the loss ℓ_t(x_t) is for some convex function ℓ_t [see, e.g. 9, 12, 19, 10].
In this work, we consider the online learning problem of bandit optimization. Similarly to other
problems of this type, a decision maker chooses at each round an action θ_t from a set of size K, and
observes information about an unknown convex loss function L. The difference is that the objective is to minimize a global convex loss L((1/T) Σ_{t=1}^T e_{θ_t}), not a cumulative one. At each round, choosing
the i-th action increases the information about the local dependency of L on its i-th coefficient. This
problem can be contrasted with the objective of minimizing the average pseudo-regret in a stochastic
bandit problem, i.e. of minimizing (1/T) Σ_{t=1}^T L(e_{θ_t}) with observation ℓ_t(e_{θ_t}), a noisy estimate of L(e_{θ_t}). At the intersection of these frameworks, when L is a linear form, is the stochastic multi-armed bandit problem. Our problem is also related to maximization of known convex objectives
[2, 3]. We compare our framework to these settings in Section 1.4.
Bandit optimization shares some similarities with stochastic optimization problems, where the objective is to minimize f(x_T) for an unknown function f, while choosing at each round a variable x_t
∗Supported by an Isaac Newton Trust Early Career Support Scheme and by The Alan Turing Institute under the EPSRC grant EP/N510129/1.
†Supported by the ANR (grant ANR-13-JS01-0004-01), and the FMJH Program Gaspard Monge in Optimization and operations research (supported in part by EDF) and from the Labex LMH.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
and observing some noisy information about the function f . Our problem can be seen as a stochastic
optimization problem over the simplex, with the caveat that the list of actions θ_1, . . . , θ_T determines the variable, as x_t = (1/t) Σ_{s=1}^t e_{θ_s}, as well as the manner in which additional information about the
function can be gathered. This setting allows us to study a more general class of problems than
multi-armed bandits, and to cover examples where there is not one optimal action, but rather an optimal global strategy, that is an optimal mix of actions. We describe several natural problems from
machine learning, statistics, or economics that are cases of bandit optimization.
This problem draws inspiration from the world of multi-armed bandit problems and that of stochastic convex optimization, and our solution to it does as well. We analyze the Upper-Confidence
Frank-Wolfe algorithm, a modification of the Frank-Wolfe algorithm [17] and of the UCB algorithm
for bandits [5]. The link with Frank-Wolfe is related to the choice of one action, and encourages
exploitation, while the link with UCB encourages choosing rarely picked actions in order to increase
knowledge about the function, encouraging exploration. This algorithm can be used for all convex
functions L, and performs in a near-optimal manner over various classes of functions. Indeed, while it has already been proved that it achieves slow rates of convergence in some cases, i.e., the error decreases as 1/√T, we are able to exhibit fast rates decreasing in 1/T, up to logarithmic terms.
These fast rates are surprising, as they sometimes even hold for non-strongly convex functions, and
in many problems with bandit feedback they cannot be reached [23, 35]. As shown in our lower
bounds, the main complexity of this problem is statistical and comes from the limited information
available about the unknown function L. Usual results in optimization with a known function are
not necessarily relevant to our problem. As an example, while linear rates in e^{−cT} are possible
in deterministic settings with variants in the Frank-Wolfe algorithm, we are limited to fast rates in
1/T under similar assumptions. Interestingly, while linear functions are one of the settings in which
the deterministic Frank-Wolfe algorithm is the most efficient, it is among the most complicated for
bandit optimization, and only slow rates are possible (see theorems 2 and 6).
Our work is organized in the following manner: we describe in Section 1 the problem of bandit
optimization. The main algorithm is introduced in Section 2, and its performance in various settings
is studied in Section 3, 4, and 5. All proofs of the main results are in the supplementary material.
Notations: For any positive integer n, denote by [n] the set {1, . . . , n} and, for any positive integer K, by Δ_K := {p ∈ R^K : p_i ≥ 0 and Σ_{i∈[K]} p_i = 1} the unit simplex of R^K. Finally, e_i stands for the i-th vector of the canonical basis of R^K. Notice that Δ_K is their convex hull.
1 Bandit Optimization
We describe the bandit optimization problem, generalizing multi-armed bandits. This stochastic
optimization problem is doubly related to bandits: The decision variable cannot be chosen freely but
is tied to the past actions, and information about the function is obtained via a bandit feedback.
1.1 Problem description
At each time step t ≥ 1, a decision maker chooses an action θ_t ∈ [K] from K different actions with the objective of minimizing an unknown convex loss function L : Δ_K → R. Unlike in traditional online learning problems, we do not assume that the overall objective of the agent is to minimize a cumulative loss Σ_t L(e_{θ_t}) but rather to minimize the global loss L(p_T), where p_t ∈ Δ_K is the vector of proportions of each action (also called occupation measure), i.e.,

p_t = (T_1(t)/t, . . . , T_K(t)/t)  with  T_i(t) = Σ_{s=1}^t 1{θ_s = i}.

Alternatively, p_t = (1/t) Σ_{s=1}^t e_{θ_s}. As usual in stochastic optimization, the performance of a policy is evaluated by controlling the difference

r(T) := E[L(p_T)] − min_{p∈Δ_K} L(p).
The information available to the policy is a feedback of bandit type: given the choice θ_t = i, it is an estimate ĝ_t of ∇L(p_t). Its precision, with respect to each coefficient i ∈ [K], is specified by a deviation function σ_{t,i}, meaning that for all δ ∈ (0, 1), it holds with probability 1 − δ that

|ĝ_{t,i} − ∇_i L(p_t)| ≤ σ_{t,i}(T_i(t), δ).
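As a concrete instance of this feedback model, the sketch below maintains empirical-mean gradient estimates with the typical deviation bound given below, for a case where coordinate i of the gradient is the mean of the i-th observation distribution; the class structure and names are ours.

    import numpy as np

    class GradientFeedback:
        """Tracks g_hat and sigma_{t,i}(T_i, delta) = sqrt(2 log(t/delta)/T_i)."""
        def __init__(self, K):
            self.sums = np.zeros(K)   # running sums of observations per action
            self.T = np.zeros(K)      # pull counts T_i(t)

        def update(self, i, x):       # observe x after playing action i
            self.sums[i] += x
            self.T[i] += 1

        def estimate(self, t, delta):
            T = np.maximum(self.T, 1)  # deviations only meaningful once pulled
            g_hat = self.sums / T
            sigma = np.sqrt(2 * np.log(t / delta) / T)
            return g_hat, sigma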
At each round, it is possible to improve the precision for one of the coefficients of the gradient but possibly at a cost of increasing the global loss. The most typical case, described in the following section, is σ_{t,i}(T_i, δ) = √(2 log(t/δ)/T_i), when the information consists of observations from
different distributions. In general, this type of feedback mechanism is indicative of a bandit feedback
(and not of a full information setting), as motivated by the following parametric setting.
1.2 Bandit feedback and parametric setting
One of the motivations is the minimization of a loss function L belonging to a known class {L(·, θ), θ ∈ R^K} with an unknown parameter θ. Choosing the i-th action provides information about θ_i, through an observation of some auxiliary distribution ν_i.
As an example, the classical stochastic multi-armed bandit problem [9] falls within our framework. Denoting by μ_i the expected loss of arm i ∈ [K], the average pseudo-regret R̄ can be expressed as

R̄(t) = (1/t) Σ_{s=1}^t μ_{θ_s} − μ⋆ = Σ_{i=1}^K (T_i(t)/t) μ_i − μ⋆ = p_t^⊤ μ − p⋆^⊤ μ,  with p⋆ = e_{i⋆}.

Hence the choice of L(μ, p) = μ^⊤ p corresponds to the problem of multi-armed bandits. Since ∇L(μ, p) = μ, the feedback mechanism for ĝ_t is induced by having a sample X_t from ν_{θ_t} at time step t, taking ĝ_{t,i} = X̄_{t,i}, the empirical mean of the T_i(t) observations from ν_i. In this case, if ν_i is sub-Gaussian with parameter 1, we have σ_{t,i}(T_i, δ) = 2√(2 log(t/δ)/T_i).
More generally, for any parametric model, we can consider the following observation setting: For all $i \in [K]$, let $\nu_i$ be a sub-Gaussian distribution with mean $\theta_i$ and tail parameter $\sigma^2$. At time $t$, for an action $\pi_t \in [K]$, we observe a realization from $\nu_{\pi_t}$. We estimate $\theta_i$ by the empirical mean $\hat\theta_{t,i}$ of the $T_i(t)$ draws from $\nu_i$, and take $\hat g_t = \nabla_p L(\hat\theta_t, p_t)$ as an estimate of the gradient of $L = L(\theta, \cdot)$ at $p_t$. The following bound on the deviations under smoothness conditions on the parametric model is a direct application of Hoeffding's inequality.
Proposition 1. Let $L = L(\theta, \cdot)$ for some $\theta \in \mathbb{R}^K$ being $\mu$-gradient-Lipschitz, i.e., such that
$$\big|\big(\nabla_p L(\theta, p)\big)_i - \big(\nabla_p L(\theta', p)\big)_i\big| \le \mu\,|\theta_i - \theta'_i|\,, \quad \forall p \in \Delta([K]).$$
Under the sub-Gaussian observation setting above, $\hat g_t = \nabla_p L(\hat\theta_t, p_t)$ is a valid gradient feedback with deviation bounds $\alpha_{t,i}(T_i, \delta) = 2\mu\sigma\sqrt{2\log(t/\delta)/T_i}$.
This Lipschitz condition on the parameter $\theta$ gives a motivation for our gradient bandit feedback.
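As a concrete illustration of this parametric feedback, here is a minimal numpy sketch; the callable grad_L and the default values of mu and sigma are our own assumptions, and the deviation formula is the reconstruction of Proposition 1 above, so treat the constants as illustrative:

    import numpy as np

    def gradient_feedback(sums, counts, grad_L, p):
        # sums[i]: running sum of draws from nu_i; counts[i] = T_i(t)
        theta_hat = sums / np.maximum(counts, 1)  # empirical means of theta
        return grad_L(theta_hat, p)               # proxy g_t = grad_p L(theta_hat, p_t)

    def deviation(t, counts, delta, mu=1.0, sigma=1.0):
        # alpha_{t,i}(T_i, delta) = 2 mu sigma sqrt(2 log(t/delta) / T_i)
        return 2 * mu * sigma * np.sqrt(2 * np.log(t / delta) / np.maximum(counts, 1))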
1.3 Examples
Stochastic multi-armed bandit: As noted above, the stochastic multi-armed bandit problem is a special case of our setting for a loss $L(p) = \mu^\top p$, and the bandit feedback allows us to construct a proxy for the gradient $\hat g_t$ with deviations $\alpha_i$ decaying in $1/\sqrt{T_i}$. The UCB algorithm used to solve this problem inspires our algorithm that generalizes to any loss function $L$, as discussed in Section 2.
Online experimental design: In the context of statistical estimation with heterogeneous data sources [8], consider the problem of allocating samples in order to minimize the variance of the final estimate. At time $t$, it is possible to sample from one of $K$ distributions $\mathcal{N}(\theta_i, \sigma_i^2)$ for $i \in [K]$, the objective being to minimize the average variance of the simple unbiased estimator,
$$\mathbb{E}[\|\hat\theta - \theta\|_2^2] = \sum_{i \in [K]} \sigma_i^2 / T_i\,, \quad \text{equivalent to} \quad L(p) = \sum_{i \in [K]} \sigma_i^2 / p_i\,.$$
For unknown $\sigma_i$, this problem falls within our framework, and the gradient with coordinates $-\sigma_i^2/p_i^2$ can be estimated by using the $T_i$ draws from $\mathcal{N}(\theta_i, \sigma_i^2)$ to construct $\hat\sigma_i^2$. This function is only defined on the interior of the simplex and is unbounded, matters that we discuss further in Section 4.3. Other objective functions than the expected $\ell_2$ norm of the error can be used, as in [11], who consider the $\ell_\infty$ norm of the actual estimated deviations, not its expectation.
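A small sketch of this objective and its gradient (the function names are ours; in the bandit setting the vector sigma2 would be replaced by the empirical variances built from the draws of each arm):

    import numpy as np

    def variance_loss(p, sigma2):
        # L(p) = sum_i sigma_i^2 / p_i: average estimation variance, up to a 1/T factor
        return np.sum(sigma2 / p)

    def variance_loss_grad(p, sigma2):
        # coordinates -sigma_i^2 / p_i^2, estimated in practice from the
        # T_i(t) draws of N(theta_i, sigma_i^2)
        return -sigma2 / p**2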
Utility maximization: A classical model to describe the utility of an agent purchasing $x_i$ units of $K$ different goods is the Cobb-Douglas utility (see e.g. [27]), defined for parameters $\theta_i \in (0, 1)$ by
$$U(x_1, \dots, x_K) = \prod_{i \in [K]} x_i^{\theta_i}\,.$$
Maximizing this utility for unknown $\theta_i$ under a budget constraint (where each price is assumed to be 1 for ease of notation), by buying one unit of one of the $K$ goods at each round, is therefore equivalent to minimizing in $p$ (the vector of proportions of each good in the basket) $L(p) = -\sum_{i \in [K]} \theta_i \log(p_i)$.
Other examples: More generally, the notion of bandit optimization can be applied to any situation where one optimizes a strategy through actions that are taken sequentially, with information gained at each round, and where the objective depends only on the proportions of actions. Other examples include a problem inspired by online Markowitz portfolio optimization, where the goal is to minimize $L(p) = p^\top \Sigma p - \lambda \mu^\top p$, with a known covariance matrix $\Sigma$ and unknown returns $\mu$, or several generalizations of bandit problems such as minimizing $L(p) = \sum_{i \in [K]} f_i(\theta_i)\, p_i$ when observations are drawn from a distribution with mean $\theta_i$, for known $f_i$.
1.4 Comparison with other problems
As mentioned in the introduction, the problem of bandit optimization is different from online learning problems related to regret minimization [21, 1, 10], even in a stochastic setting. While the usual objective is to minimize a cumulative regret related to $\frac{1}{T}\sum_t \ell_t(x_t)$, we focus on $L\big(\frac{1}{T}\sum_t e_{\pi_t}\big)$.
Problems related to online optimization of global costs or objectives have been studied in similar settings [2, 3, 16, 32]. They are equivalent to minimizing a loss $L(p_T^\top V)$, where $V$ is a $K \times d$ unknown matrix and $L(\cdot) : \mathbb{R}^d \to \mathbb{R}$ is known. The feedback at stage $t$ is a noisy evaluation of $V_{\pi_t}$. In the stochastic case [2, 3], this is close to our setting, even though neither setting directly subsumes the other. Only slow rates of convergence of order $1/\sqrt{T}$ are derived for the variant of Frank-Wolfe, while we aim at fast rates, which are optimal. In contrast, in the adversarial case [16, 32], there are instances of the problem where the average regret cannot decrease to zero [26].
Using the Frank-Wolfe algorithm in a stochastic optimization problem has also already been considered, particularly in [25], where the estimates of the gradients are increasingly precise in $t$, independently of the actions of the decision maker. This setting, where the action at each round is to pick $x_t$ in the domain in order to minimize $f(x_T)$, is therefore closer to classical stochastic optimization than to online learning problems related to bandits [9, 19, 10].
2 Upper-Confidence Frank-Wolfe algorithm
With linear functions, as in multi-armed bandits, an estimate of the gradient can be established by using the past observations, as well as confidence intervals on each coefficient in $1/\sqrt{T_i}$. The UCB algorithm instructs to pick the action with the smallest lower confidence estimate $\hat U_{t,i}$ for the loss. This is equivalent to making a step of size $1/(t+1)$ in the direction of the corner of the simplex $e$ that minimizes $e^\top \hat U_t$. Following this intuition, we introduce the UCB Frank-Wolfe algorithm, which uses a proxy of the gradient, penalized by the size of confidence intervals.
Algorithm 1: UCB Frank-Wolfe algorithm
Input: $K$, $p_0 = \mathbf{1}_{[K]}/K$, sequence $(\delta_t)_{t \ge 0}$;
for $t \ge 0$ do
    Observe $\hat g_t$, noisy estimate of $\nabla L(p_t)$;
    for $i \in [K]$ do
        $\hat U_{t,i} = \hat g_{t,i} - \alpha_{t,i}(T_i(t), \delta_t)$
    end
    Select $\pi_{t+1} \in \arg\min_{i \in [K]} \hat U_{t,i}$;
    Update $p_{t+1} = p_t + \frac{1}{t+1}\,(e_{\pi_{t+1}} - p_t)$
end
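To make the procedure concrete, here is a self-contained numpy sketch of this loop on the linear (multi-armed bandit) loss $L(p) = \mu^\top p$, where pulling an arm yields one Gaussian observation and the gradient proxy is the vector of empirical means; this is our own minimal rendering of the pseudocode, not reference code from the authors:

    import numpy as np

    def ucb_frank_wolfe_bandit(mu, T, seed=0):
        # UCB Frank-Wolfe on L(p) = mu^T p: pulling arm i yields one N(mu_i, 1)
        # observation, and the gradient proxy is the vector of empirical means.
        rng = np.random.default_rng(seed)
        K = len(mu)
        p = np.full(K, 1.0 / K)                # p_0: uniform initial proportions
        counts = np.ones(K)                    # T_i, after one initial pull per arm
        sums = rng.normal(mu, 1.0)             # one observation per arm to initialize
        for t in range(1, T + 1):
            delta_t = 1.0 / t ** 2
            g = sums / counts                  # empirical means = gradient proxy
            alpha = np.sqrt(2 * np.log(t / delta_t) / counts)
            i = int(np.argmin(g - alpha))      # smallest lower confidence estimate
            sums[i] += rng.normal(mu[i], 1.0)  # bandit feedback for the chosen arm
            counts[i] += 1
            e_i = np.zeros(K)
            e_i[i] = 1.0
            p += (e_i - p) / (t + 1.0)         # Frank-Wolfe step of size 1/(t+1)
        return p

    p_T = ucb_frank_wolfe_bandit(np.array([0.9, 0.5, 0.2]), T=5000)
    # p_T concentrates on the arm with the smallest expected loss (here the third)

The initial pull of each arm makes $p_t$ differ slightly from $(T_1(t)/t, \dots, T_K(t)/t)$ at small $t$; this is a standard initialization device for UCB-type methods, not part of the algorithm as stated.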
Notice that for any algorithm, the selection of an action $\pi_{t+1} \in [K]$ at time step $t+1$ updates the variable $p$ with respect to the following dynamics:
$$p_{t+1} = \Big(1 - \frac{1}{t+1}\Big)\, p_t + \frac{1}{t+1}\, e_{\pi_{t+1}} = p_t + \frac{1}{t+1}\,\big(e_{\pi_{t+1}} - p_t\big)\,. \qquad (1)$$
If the choice of $e_{\pi_{t+1}}$ were the minimizer of $s^\top \nabla L(p_t)$ over all $s \in \Delta_K$, this would precisely be the Frank-Wolfe algorithm with step size $1/(t+1)$. Inspired by this similarity, our selection rule is driven by the same principle, using a proxy $\hat U_t$ for $\nabla L(p_t)$ based on the information up to time $t$. Our selection rule is therefore driven by two principles, borrowing from tools in convex optimization (the Frank-Wolfe algorithm) and classical bandit problems (upper-confidence bounds).
The choice of action $\pi_{t+1}$ is equivalent to taking $e_{\pi_{t+1}} \in \arg\min_{s \in \Delta_K} s^\top \hat U_t$. The computational cost of this procedure is very light, and apart from gradient computations, it is linear in $K$ at each iteration, leading to a global cost of order $KT$.
3 Slow rates
In this section we show that when $\alpha_i$ is of order $1/\sqrt{T_i}$, as motivated by the parametric model of Section 1.2, our algorithm has an approximation error of order $\sqrt{\log(T)/T}$ over the very general class of smooth convex functions. We refer to this as the slow rate. Our analysis is based on the classical study of the Frank-Wolfe algorithm [see, e.g. 22, and references therein]. We consider the case of $C$-smooth convex functions on the unit simplex, for which we recall the definition.
Definition (Smooth functions). For a set $D \subset \mathbb{R}^n$, a function $f : D \to \mathbb{R}$ is said to be a $C$-smooth function if it is differentiable and if its gradient is $C$-Lipschitz continuous, i.e. the following holds:
$$\|\nabla f(x) - \nabla f(y)\|_2 \le C\,\|x - y\|_2\,, \quad \forall x, y \in D\,.$$
We denote by $\mathcal{F}_{C,K}$ the set of $C$-smooth convex functions. They attain their minimum at a point $p^\star \in \Delta_K$, and their Hessian is uniformly bounded, $\nabla^2 L(p) \preceq C I_K$, if they are twice differentiable. We establish in this general setting a slow rate when $\alpha_i$ decreases like $1/\sqrt{T_i}$.
Theorem 2 (Slow rate). Let $L$ be a $C$-smooth convex function over the unit simplex $\Delta_K$. For any $T \ge 1$, after $T$ steps of the UCB Frank-Wolfe algorithm with a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$ and the choice $\delta_t = 1/t^2$, it holds that
$$\mathbb{E}[L(p_T)] - L(p^\star) \;\le\; 4\sqrt{\frac{3K\log(T)}{T}} \;+\; \frac{C\log(eT)}{T} \;+\; \Big(\frac{\pi^2}{6} + K\Big)\,\frac{2\|\nabla L\|_\infty + \|L\|_\infty}{T}\,.$$
The proof draws inspiration from the analysis of the Frank-Wolfe algorithm with stepsize $1/(t+1)$ and of the UCB algorithm. Notice that our algorithm is adaptive to the gradient Lipschitz constant $C$, and that the leading term of the error does not depend on it. We also emphasize the fact that the dependency in $K$ is expected, and optimal, in the bandit setting.
For linear mappings $L(p) = p^\top \mu$, our analysis is equivalent to studying the UCB algorithm in multi-armed bandits. The slow rate in Theorem 2 corresponds to a regret of order $\sqrt{KT\log(T)}$, the distribution-independent (or worst case) performance of UCB. The extra dependency in $\sqrt{\log(T)}$ could be reduced to $\sqrt{\log(K)}$, or even optimally to 1, by using more carefully tailored confidence intervals, for instance by replacing the $\log(t)$ term appearing in the definition of the estimated gradients by $\log(T/T_i(t))$ or $\log(T/(K\,T_i(t)))$ if the horizon $T$ is known in advance, as in the algorithms MOSS or ETC (see [4, 29, 30]), but at the cost of a more involved analysis. Thus, multi-armed bandits provide a lower bound for the approximation error $\mathbb{E}[L(p_T)] - L(p^\star)$ of order $\sqrt{K/T}$ for smooth convex functions. We discuss lower bounds further in Section 5.
For the sake of clarity, we state all our results when $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$, but our techniques handle more general deviations of the form $\alpha_{t,i}(T_i, \delta) = s\,\big(\log(t/\delta)/T_i\big)^{\kappa}$, where $s \in \mathbb{R}$ and $\kappa > 0$ are some known parameters. More general results can be found in the supplementary material.
4 Fast rates
In this section, we describe situations where the approximation error rate can be improved to a fast rate of order $\log(T)/T$, when we consider various classes of functions, with additional assumptions.
4.1 Stochastic multi-armed bandits and functions minimized on vertices
A very natural and well-known, yet illustrative, example of such a restricted class of functions is simply the case of classical bandits where $\Delta^{(i)} := \mu_i - \mu^\star$ is bounded away from 0 for $i \ne \star$. Our analysis of the algorithm can be adapted to this special case with the following result.
Proposition 3. Let $L$ be the linear function $p \mapsto p^\top \mu$. After $T$ steps of the UCB Frank-Wolfe algorithm with a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$ and the choice $\delta_t = 1/t^2$, the following holds:
$$\mathbb{E}[L(p_T)] - L(p^\star) \;\le\; \frac{48\log(T)}{T} \sum_{i \ne \star} \frac{1}{\Delta^{(i)}} \;+\; 3\Big(\frac{\pi^2}{3} + K\Big)\,\frac{\sqrt{K}\,\|\mu\|_\infty}{T}\,.$$
The constants in this proposition are sub-optimal (for instance, the 48 can be reduced to 2 using a more careful but involved analysis). It is provided here to show that this classical bound on the pseudo-regret in stochastic multi-armed bandits [see e.g. 9, and references therein] can be recovered with Frank-Wolfe-type techniques, illustrating further the links between bandit problems and convex optimization [20, 34]. This result can actually be generalized to any convex function which is minimized on a vertex of the simplex with a gradient whose component-wise differences are bounded away from 0.
Proposition 4. Let $L$ be a convex mapping that attains its minimum on $\Delta_K$ at a vertex $p^\star = e_{i^\star}$, and such that $\Delta^{(i)}(L) := \nabla_i L(p^\star) - \nabla_{i^\star} L(p^\star) > 0$ for all $i \ne i^\star$. Then, after $T$ steps of the UCB Frank-Wolfe algorithm with a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$ and the choice $\delta_t = 1/t^2$, the following holds:
$$\mathbb{E}[L(p_T)] - L(p^\star) \;\le\; \rho(L)\,\frac{48\log(T)}{T} \sum_{i \ne i^\star} \frac{1}{\Delta^{(i)}(L)} \;+\; \frac{C\log(eT)}{T} \;+\; \Big(\frac{\pi^2}{6} + K\Big)\,\frac{2\|\nabla L\|_\infty + \|L\|_\infty}{T}\,,$$
where $\rho(L) = 1 + CK/\Delta_{\min}(L)$ and $\Delta_{\min}(L) = \min_{i \ne i^\star} \Delta^{(i)}(L)$.
The KKT conditions imply that $\Delta^{(i)}(L) \ge 0$ whenever $p^\star$ is in a corner, but the strict inequality is not always guaranteed. In particular, this result may not hold if $p^\star$ is the global minimum of $L$ over $\mathbb{R}^K$. This type of condition has also been linked with rates of convergence in stochastic optimization problems [15].
The extra multiplicative factor $\rho(L)$ can be large, but it would be of the order of $1 + o(1)$ using variants of our algorithm with results that hold only with high probability (typically with confidence bounds of the form $\sqrt{2\log(1/\delta)/T_i}$).
4.2 Strongly convex functions
Another classical assumption in convex optimization is strong convexity, as recalled below. We denote by $\mathcal{S}_{\mu,K}$ the set of $\mu$-strongly convex functions on $\Delta_K$. This assumption usually improves the rates of approximation error in many settings, even in stochastic optimization or some settings of online learning [see, e.g. 31, 14, 33, 6, 18, 19, 7]. Interestingly enough, though, strong convexity cannot be leveraged to improve rates of convergence in online convex optimization [35, 23], where the $1/\sqrt{T}$ rate of convergence cannot be improved. Moreover, leveraging strong convexity usually requires adapting the step size of gradient descent, or using line search and/or away steps for classical Frank-Wolfe methods. Those techniques cannot be adapted to our setting, where step sizes are fixed.
Definition (Strongly convex functions). For a set $D \subset \mathbb{R}^n$, a function $f : D \to \mathbb{R}$ is said to be $\mu$-strongly convex if for all $x, y \in D$, we have
$$f(x) \ge f(y) + \nabla f(y)^\top (x - y) + \frac{\mu}{2}\,\|x - y\|_2^2\,.$$
We already covered the case where the convex functions are minimized outside the simplex. We will
now assume that the minimum lies in its relative interior.
Theorem 5. Let $L : \Delta_K \to \mathbb{R}$ be a $C$-smooth, $\mu$-strongly convex function such that its minimum $p^\star$ satisfies $\mathrm{dist}(p^\star, \partial\Delta_K) \ge \eta$, for some $\eta \in (0, 1/K]$. After $T$ steps of the UCB Frank-Wolfe algorithm with a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$, it holds that, with the choice of $\delta_t = 1/t^2$,
$$\mathbb{E}[L(p_T)] - L(p^\star) \;\le\; c_1\,\frac{\log^2(T)}{T} + c_2\,\frac{\log(T)}{T} + c_3\,\frac{1}{T}\,,$$
for constants $c_1 = \frac{96K}{\mu\eta^2}$, $c_2 = \frac{24}{\mu\eta^3} + C$ and $c_3 = 24\big(\frac{20}{\mu\eta^2}\big)^2 K + \frac{\pi^2}{2} + C$.
The proof is based on an improvement in the analysis of the UCB Frank-Wolfe algorithm, relying on a better control of the duality gap that is possible in the strongly convex case. It is a consequence of an inequality due to Lacoste-Julien and Jaggi [24, Lemma 2]. In order to get the result, we adapt these ideas to a case of unknown gradient, with bandit feedback. We note that this approach is similar to the one in [25], which focuses on stochastic optimization problems, as discussed in Section 1.4.
Our framework is more complicated in some aspects than typical settings in stochastic optimization, where strong assumptions can usually be made on the noisy gradient feedback. These include stochastic gradients that are independent unbiased estimates of the true gradient, or error terms that are decreasing in $t$. Here, such properties do not hold: as an example, in a parametric setting, information is only obtained about one of the coefficients, and there are strong dependencies between successive gradient feedbacks. Dealing with these aspects, as well as with the fact that our gradient proxy is penalized by the size of the confidence intervals, are some of the main challenges of the proof.
4.3 Interior-smooth functions
Many interesting examples of bandit optimization are not exactly covered by the case of functions that are $C$-smooth on the whole unit simplex. In particular, for several applications, the function diverges at the boundary, as in the examples of Cobb-Douglas utility maximization and variance minimization from Section 1.3. Recall that the loss was defined by
$$\mathbb{E}[\|\hat\theta - \theta\|_2^2] = \sum_{i \in [K]} \frac{\sigma_i^2}{T_i} = \frac{1}{T} \sum_{i \in [K]} \frac{\sigma_i^2}{p_i} = \frac{1}{T}\, L(p)\,.$$
The gradient Lipschitz constant is infinite, but if we knew for instance that $\sigma_i \in [\underline\sigma_i, \bar\sigma_i]$, we could safely sample each arm $i$ first a linear number of times, because $p^\star_i \ge \underline p_i := \underline\sigma_i / \sum_j \bar\sigma_j$. We would then have $(p_t)_i \ge \underline p_i$ at all stages, and our analysis holds with the constant $C = 2\bar\sigma_{\max}^2 \big(\sum_j \bar\sigma_j\big)^3 / \underline\sigma_{\min}^3$.
Even without knowledge of the $\sigma_i^2$, it is possible to quickly obtain rough estimates, as illustrated by Lemma 2 in the appendix. Only a logarithmic number of samples of each action are needed. Once they are gathered, one can keep sampling each arm a linear number of times, as suggested when the lower/upper bounds are known beforehand. This leads to a Lipschitz constant $C = \big(9\sum_j \sigma_j\big)^3/\sigma_{\min}$, which is, up to a multiplicative factor, the gradient Lipschitz constant at the minimum.
5 Lower bounds
The results shown in Sections 3 and 4 exhibit different theoretical guarantees for our algorithm
depending on the class of function considered. We discuss here the optimality of these results.
5.1 Slow rate lower bound
In Theorem 2, we show a slow rate of order $\sqrt{K\log(T)/T}$ for the approximation error of our algorithm over the class of $C$-smooth convex functions of $\mathbb{R}^K$. Up to the logarithmic term, this result is optimal: no algorithm based on the same feedback can significantly improve the rate of approximation. This is a consequence of the following theorem, a direct corollary of a result by [5].
Theorem 6. For any algorithm based on a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$ and that outputs $\hat p_T$, we have over the class of linear forms $\mathcal{L}_K$ that, for some constant $c > 0$,
$$\inf_{\hat p_T} \sup_{L \in \mathcal{L}_K} \big\{\mathbb{E}[L(\hat p_T)] - L(p^\star)\big\} \;\ge\; c\,\sqrt{K/T}\,.$$
This result is established over the class of linear functions over the simplex (for which $C = 0$), when the feedback consists of a draw from a distribution with mean $\mu_i$. As mentioned in Section 3, the extra logarithmic term in our upper bound comes from our algorithm, which has the same behavior as UCB. Nevertheless, as mentioned before, modifying our algorithm to recover the behavior of MOSS [4], or even ETC [see e.g. 29, 30], would improve the upper bound and remove the logarithmic term.
5.2 Fast rate lower bound
We have shown that in the case of strongly convex smooth functions, there is an approximation error upper bound of order $(K/\eta^4)\log(T)/T$ for the performance of our algorithm, where $\eta < 1/K$. We provide a lower bound over this class of functions in the following theorem.
Theorem 7. For any algorithm with a bandit feedback such that $\alpha_{t,i}(T_i, \delta) = \sqrt{2\log(t/\delta)/T_i}$ and output $\hat p_T$, we have over the class $\mathcal{S}_{1,K}$ of 1-strongly convex functions that, for some constant $c > 0$,
$$\inf_{\hat p_T} \sup_{L \in \mathcal{S}_{1,K}} \big\{\mathbb{E}[L(\hat p_T)] - L(p^\star)\big\} \;\ge\; c\,K^2/T\,.$$
The proof relies on the complexity of minimizing quadratic functions $\frac{1}{2}\|p - \theta\|_2^2$ when observing a draw from a distribution with mean $\theta_i$. Our upper bound is in the best case of order $K^5\log(T)/T$, as $\eta \le 1/K$. Understanding more precisely the optimal rate is an interesting avenue for future research.
5.3 Mixed feedbacks lower bound
In our analysis of this problem, we have only considered settings where the feedback upon choosing
action i gives information about the i-th coefficient of the gradient. The two following cases show
that even in simple settings, our upper bounds will not hold if the relationship between action and
feedback is different, when the feedback corresponds to another coefficient.
Proposition 8. For $L$ in the class of 1-strongly convex functions on $\Delta_3$, we have in the case of a mixed bandit feedback that
$$\inf_{\hat p_T} \sup_{L \in \mathcal{S}_{1,3}} \big\{\mathbb{E}[L(\hat p_T)] - L(p^\star)\big\} \;\ge\; c/T^{2/3}\,.$$
For strongly convex functions, even with $K = 3$, there are therefore pathological mixed feedback settings where the error is at least of order $1/T^{2/3}$ instead of $1/T$. The case of smooth convex functions is covered by the existing lower bounds for the problem of partial monitoring [13], which gives a lower bound of order $1/T^{1/3}$ instead of $1/\sqrt{T}$.
Proposition 9. For $L$ in the class of linear forms $\mathcal{F}_3$ on $\Delta_3$, with a mixed bandit feedback we have
$$\inf_{\hat p_T} \sup_{L \in \mathcal{F}_3} \big\{\mathbb{E}[L(\hat p_T)] - L(p^\star)\big\} \;\ge\; c/T^{1/3}\,.$$
6 Discussion
We study the online minimization of a stochastic global loss with a bandit feedback. This is naturally motivated by many applications with a parametric setting, and tradeoffs between exploration and exploitation. The UCB Frank-Wolfe algorithm performs optimally in a generic setting.
The fast rates of convergence obtained for some classes of functions are a significant improvement over the slow rates that hold for smooth convex functions. In bandit-type problems similar to our problem, it is not always possible to leverage additional assumptions such as strong convexity: it has been proved impossible in the closely related setting of online convex optimization [23, 35]. When it
is possible, step sizes must usually depend on the strong convexity parameter, as in gradient descent
[28]. This is not the case here, where the step size is fixed by the mechanics of the problem. We
have also shown that fast rates are possible without requiring strong convexity, with a gap condition
on the gradient at an extreme point, more commonly associated with bandit problems.
We mention that several extensions of our model, motivated by heterogeneous estimation, are quite interesting but out of scope. For instance, assume an experimentalist can choose one of $K$ known covariates $X_i$ in order to estimate an unknown $\theta \in \mathbb{R}^K$, and observes $y_t = X_{\pi_t}^\top(\theta + \varepsilon_t)$, where $\varepsilon_t \sim \mathcal{N}(0, \Sigma)$. Variants of that problem with covariates or contexts [29] can also be considered. Assume for instance that $\theta_i(\cdot)$ and $\sigma_i^2(\cdot)$ are regular functions of covariates $\omega \in \mathbb{R}^d$. The objective is to estimate all the functions $\theta_i(\cdot)$.
References
[1] A. Agarwal, D. P. Foster, D. Hsu, S. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. In Proceedings of the 24th International Conference on Neural Information Processing Systems, 2011.
[2] S. Agrawal and N. Devanur. Bandits with concave rewards and convex knapsacks. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pages 989-1006, New York, NY, USA, 2014. ACM.
[3] S. Agrawal, N. Devanur, and L. Li. An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives. Proceedings of the Annual Conference on Learning Theory (COLT), 2016.
[4] J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. Proceedings of the Annual Conference on Learning Theory (COLT), 2009.
[5] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The non-stochastic multi-armed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
[6] F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In Adv. NIPS, 2013.
[7] F. Bach and V. Perchet. Highly-smooth zero-th order online optimization. COLT 2016, 2016.
[8] Q. Berthet and V. Chandrasekaran. Resource allocation for statistical estimation. Proceedings of the IEEE, 104(1):115-125, 2016.
[9] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Machine Learning, 5(1):1-122, 2012.
[10] S. Bubeck, R. Eldan, and Y. T. Lee. Kernel-based methods for bandit convex optimization. CoRR, abs/1607.03084, 2016.
[11] A. Carpentier, A. Lazaric, M. Ghavamzadeh, R. Munos, and A. Antos. Upper-confidence-bound algorithms for active learning in multi-armed bandits. Preprint, 2015.
[12] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] N. Cesa-Bianchi, G. Lugosi, and G. Stoltz. Regret minimization under partial monitoring. Math. Oper. Res., 31(3):562-580, August 2006.
[14] J. Dippon. Accelerated randomized stochastic optimization. Ann. Statist., 31(4):1260-1281, 08 2003.
[15] J. Duchi and F. Ruan. Local asymptotics for some stochastic optimization problems: Optimality, constraint identification, and dual averaging. Arxiv Preprint, 2016.
[16] E. Even-Dar, R. Kleinberg, S. Mannor, and Y. Mansour. Online learning for global cost functions. In Proceedings of COLT, 2009.
[17] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Res. Logis. Quart., 3:95-110, 1956.
[18] E. Hazan, T. Koren, and K. Levy. Logistic regression: Tight bounds for stochastic and online optimization. In Proc. Conference On Learning Theory (COLT), 2014.
[19] E. Hazan and K. Levy. Bandit convex optimization: Towards tight bounds. In Adv. NIPS, 2014.
[20] E. Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, pages 287-303, 2012.
[21] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Mach. Learn., 69(2-3):169-192, 2007.
[22] M. Jaggi. Sparse Convex Optimization Methods for Machine Learning. PhD thesis, ETH Zurich, 2011.
[23] K. Jamieson, R. Nowak, and B. Recht. Query complexity of derivative-free optimization. Advances in Neural Information Processing Systems, 2012.
[24] S. Lacoste-Julien and M. Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe algorithms. NIPS 2013, 2013.
[25] J. Lafond, H.-T. Wai, and E. Moulines. On the online Frank-Wolfe algorithms for convex and non-convex optimizations. Arxiv Preprint, 2015.
[26] S. Mannor, V. Perchet, and G. Stoltz. Approachability in unknown games: Online learning meets multi-objective optimization. In Proceedings of COLT, 2014.
[27] A. Mas-Colell, M. D. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[28] Y. Nesterov. Introductory Lectures on Convex Optimization. Springer, 2003.
[29] V. Perchet and P. Rigollet. The multi-armed bandit problem with covariates. Ann. Statist., 41:693-721, 2013.
[30] V. Perchet, P. Rigollet, S. Chassang, and E. Snowberg. Batched bandit problems. Ann. Statist., 44(2):660-681, 04 2016.
[31] B. T. Polyak and A. B. Tsybakov. Optimal order of accuracy of search algorithms in stochastic optimization. Problemy Peredachi Informatsii, 26(2):45-53, 1990.
[32] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Beyond regret. In Sham M. Kakade and Ulrike von Luxburg, editors, Proceedings of the 24th Annual Conference on Learning Theory, volume 19 of Proceedings of Machine Learning Research, pages 559-594, Budapest, Hungary, 09-11 Jun 2011. PMLR.
[33] A. Saha and A. Tewari. Improved regret guarantees for online smooth convex optimization with bandit feedback. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[34] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.
[35] O. Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. In Proc. Conference on Learning Theory, 2013.
6,432 | 6,818 | Zap Q-Learning
Adithya M. Devraj
Sean P. Meyn
Department of Electrical and Computer Engineering,
University of Florida,
Gainesville, FL 32608.
[email protected], [email protected]
Abstract
The Zap Q-learning algorithm introduced in this paper is an improvement of
Watkins' original algorithm and recent competitors in several respects. It is a
matrix-gain algorithm designed so that its asymptotic variance is optimal. Moreover, an ODE analysis suggests that the transient behavior is a close match to a
deterministic Newton-Raphson implementation. This is made possible by a two
time-scale update equation for the matrix gain sequence. The analysis suggests
that the approach will lead to stable and efficient computation even for non-ideal
parameterized settings. Numerical experiments confirm the quick convergence,
even in such non-ideal cases.
1 Introduction
It is recognized that algorithms for reinforcement learning such as TD- and Q-learning can be slow
to converge. The poor performance of Watkins' Q-learning algorithm was first quantified in [25],
and since then many papers have appeared with proposed improvements, such as [9, 1].
An emphasis in much of the literature is computation of finite-time PAC (probably approximately correct) bounds as a metric for performance. Explicit bounds were obtained in [25] for Watkins' algorithm, and in [1] for the "speedy" Q-learning algorithm that was introduced by these authors. A general
theory is presented in [18] for stochastic approximation algorithms.
In each of the models considered in prior work, the update equation for the parameter estimates can be expressed as
$$\theta_{n+1} = \theta_n + \alpha_n\big[f(\theta_n) + \Delta_{n+1}\big]\,, \quad n \ge 0\,, \qquad (1)$$
in which $\{\alpha_n\}$ is a positive gain sequence, and $\{\Delta_n\}$ is a martingale difference sequence. This representation is critical in analysis, but unfortunately is not typical in reinforcement learning applications outside of these versions of Q-learning. For Markovian models, the usual transformation used to obtain a representation similar to (1) results in an error sequence $\{\Delta_n\}$ that is the sum of a martingale difference sequence and a telescoping sequence [15]. It is the telescoping sequence that
This gap in the research literature carries over to the general theory of Markov chains. Examples of
concentration bounds for i.i.d. sequences or martingale-difference sequences include the finite-time
bounds of Hoeffding and Bennett. Extensions to Markovian models either offer very crude bounds
[17], or restrictive assumptions [14, 11]; this remains an active area of research [20].
In contrast, asymptotic theory for stochastic approximation (as well as general state space Markov
chains) is mature. Large Deviations or Central Limit Theorem (CLT) limits hold under very general
assumptions [3, 13, 4]. The CLT will be a guide to algorithm design in the present paper. For a
typical stochastic approximation algorithm, this takes the following form: denoting $\{\tilde\theta_n := \theta_n - \theta^* : n \ge 0\}$ to be the error sequence, under general conditions the scaled sequence $\{\sqrt{n}\,\tilde\theta_n : n \ge 1\}$
converges in distribution to a Gaussian distribution, $N(0, \Sigma_\theta)$. Typically, the scaled covariance is also convergent to the limit, which is known as the asymptotic covariance:
$$\Sigma_\theta = \lim_{n\to\infty} n\,\mathbb{E}\big[\tilde\theta_n \tilde\theta_n^{\mathsf T}\big]\,. \qquad (2)$$
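For intuition, the asymptotic covariance (2) can be estimated empirically from parallel runs of an algorithm; a short numpy sketch, where theta_runs and theta_star are hypothetical inputs:

    import numpy as np

    def asymptotic_covariance(theta_runs, theta_star, n):
        # theta_runs: (N, d) array holding theta_n from N independent runs
        err = theta_runs - theta_star        # error sequence tilde-theta_n per run
        return n * (err.T @ err) / len(err)  # Monte-Carlo estimate of (2)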
An asymptotic bound such as (2) may not be satisfying for practitioners of stochastic optimization
or reinforcement learning, given the success of finite-n performance bounds in prior research. However, the fact that the asymptotic covariance $\Sigma_\theta$ has a simple representation, and can therefore be
easily improved or optimized, makes it a compelling tool to consider. Moreover, as the examples in
this paper suggest, the asymptotic covariance is often a good predictor of finite-time performance,
since the CLT approximation is accurate for reasonable values of n.
Two approaches are known for optimizing the asymptotic covariance. First is the remarkable averaging technique introduced in [21, 22, 24] (also see [12]). Second is Stochastic Newton-Raphson,
based on a special choice of matrix gain for the algorithm [13, 23]. The algorithms proposed here
use the second approach.
Matrix gain variants of TD-learning [10, 19, 29, 30] and Q-learning [27] are available in the literature, but none are based on optimizing the asymptotic variance. It is a fortunate coincidence that
LSTD($\lambda$) of [6] achieves this goal [8].
In addition to accelerating the convergence rate of the standard Q-learning algorithm, it is hoped
that this paper will lead to entirely new algorithms. In particular, there is little theory to support
Q-learning in non-ideal settings in which the optimal "Q-function" does not lie in the parameterized
function class. Convergence results have been obtained for a class of optimal stopping problems
[31], and for deterministic models [16]. There is now intense practical interest, despite an incomplete
theory. A stronger supporting theory will surely lead to more efficient algorithms.
Contributions A new class of Q-learning algorithms is proposed, called Zap Q-learning, designed
to more accurately mimic the classical Newton-Raphson algorithm. It is based on a two time-scale
stochastic approximation algorithm, constructed so that the matrix gain tracks the gain that would
be used in a deterministic Newton-Raphson method.
A full analysis is presented for the special case of a complete parameterization (similar to the setting
of Watkins' algorithm [28]). It is found that the associated ODE has a remarkable and simple representation, which implies consistency under suitable assumptions. Extensions to non-ideal parameterized settings are also proposed, and numerical experiments show dramatic variance reductions.
Moreover, results obtained from finite-n experiments show close solidarity with asymptotic theory.
The remainder of the paper is organized as follows. The new Zap Q-learning algorithm is introduced
in Section 2, which contains a summary of the theory from extended version of this paper [8].
Numerical results are surveyed in Section 3, and conclusions are contained in Section 4.
2 Zap Q-Learning
Consider an MDP model with state space $\mathsf X$, action space $\mathsf U$, cost function $c : \mathsf X \times \mathsf U \to \mathbb{R}$, and discount factor $\beta \in (0, 1)$. It is assumed that the state and action spaces are finite: denote $\ell = |\mathsf X|$, $\ell_u = |\mathsf U|$, and $P_u$ the $\ell \times \ell$ conditional transition probability matrix, conditioned on $u \in \mathsf U$. The state-action process $(X, U)$ is adapted to a filtration $\{\mathcal F_n : n \ge 0\}$, and Q1 is assumed throughout:
Q1: The joint process $(X, U)$ is an irreducible Markov chain, with unique invariant pmf $\varpi$.
The minimal value function is the unique solution to the discounted-cost optimality equation:
$$h^*(x) = \min_{u \in \mathsf U} Q^*(x, u) := \min_{u \in \mathsf U}\Big\{c(x, u) + \beta \sum_{x' \in \mathsf X} P_u(x, x')\,h^*(x')\Big\}\,, \quad x \in \mathsf X.$$
The "Q-function" solves a similar fixed point equation:
$$Q^*(x, u) = c(x, u) + \beta \sum_{x' \in \mathsf X} P_u(x, x')\,\underline{Q}^*(x')\,, \quad x \in \mathsf X,\ u \in \mathsf U, \qquad (3)$$
in which $\underline{Q}(x) := \min_{u \in \mathsf U} Q(x, u)$ for any function $Q : \mathsf X \times \mathsf U \to \mathbb{R}$.
Given any function $\varrho : \mathsf X \times \mathsf U \to \mathbb{R}$, let $\mathcal Q(\varrho)$ denote the corresponding solution to the fixed point equation (3), with $c$ replaced by $\varrho$: the function $q = \mathcal Q(\varrho)$ is the solution to the fixed point equation
$$q(x, u) = \varrho(x, u) + \beta \sum_{x'} P_u(x, x') \min_{u'} q(x', u')\,, \quad x \in \mathsf X,\ u \in \mathsf U.$$
The mapping $\mathcal Q$ is a bijection on the set of real-valued functions on $\mathsf X \times \mathsf U$. It is also piecewise linear, concave and monotone (see [8] for proofs and discussion).
It is known that Watkins' Q-learning algorithm can be regarded as a stochastic approximation method [26, 5] to obtain the solution $\theta^* \in \mathbb{R}^d$ to the steady-state mean equations
$$\mathbb{E}\Big[\big(c(X_n, U_n) + \beta \underline{Q}^{\theta^*}(X_{n+1}) - Q^{\theta^*}(X_n, U_n)\big)\,\zeta_n(i)\Big] = 0\,, \quad 1 \le i \le d\,, \qquad (4)$$
where $\{\zeta_n\}$ are $d$-dimensional $\mathcal F_n$-measurable functions and $Q^\theta = \theta^{\mathsf T}\psi$ for basis functions $\{\psi_i : 1 \le i \le d\}$. In Watkins' algorithm $\zeta_n = \psi(X_n, U_n)$, and the basis functions are indicator functions: $\psi_k(x, u) = \mathbb{I}\{x = x^k, u = u^k\}$, $1 \le k \le d$, with $d = \ell \cdot \ell_u$ the total number of state-action pairs [26]. In this special case we identify $Q^\theta = Q_\theta$, and the parameter $\theta$ is identified with the estimate $Q_\theta$. A stochastic approximation algorithm to solve (4) coincides with Watkins' algorithm [28]:
$$\theta_{n+1} = \theta_n + \alpha_{n+1}\big(c(X_n, U_n) + \beta \underline{Q}^{\theta_n}(X_{n+1}) - Q^{\theta_n}(X_n, U_n)\big)\,\psi(X_n, U_n) \qquad (5)$$
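In the tabular setting of (5), the update touches a single entry of the Q-table; a minimal sketch, where the sampled transition (x, u, x_next) and the cost value c_xu are assumed given:

    import numpy as np

    def watkins_update(Q, x, u, c_xu, x_next, beta, alpha):
        # One step of (5) in the tabular case: Q is an (l, l_u) array, and the
        # basis vector psi(x, u) is the indicator of the entry (x, u).
        td = c_xu + beta * Q[x_next].min() - Q[x, u]  # temporal difference
        Q[x, u] += alpha * td
        return Q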
One very general technique that is used to analyze convergence of stochastic approximation algorithms is to consider the associated limiting ODE, which is the continuous-time, deterministic approximation of the original recursion [4, 5]. For (5), denoting the continuous time approximation of $\{\theta_n\}$ by $\{q_t\}$, and under standard assumptions on the gain sequence $\{\alpha_n\}$, the associated ODE is of the form
$$\frac{d}{dt} q_t(x, u) = \varpi(x, u)\Big(c(x, u) + \beta \sum_{x'} P_u(x, x') \min_{u'} q_t(x', u') - q_t(x, u)\Big)\,. \qquad (6)$$
Under Q1, $\{q_t\}$ converges to $Q^*$: a key step in the proof of convergence of $\{\theta_n\}$ to the same limit.
While Watkins' Q-learning (5) is consistent, it is argued in [8] that the asymptotic covariance of this algorithm is typically infinite. This conclusion is complementary to the finite-n analysis of [25]:
Theorem 2.1. Watkins' Q-learning algorithm with step-size $\alpha_n \equiv 1/n$ is consistent under Assumption Q1. Suppose that in addition $\max_{x,u} \varpi(x, u) \le \tfrac{1}{2}(1-\beta)^{-1}$, and the conditional variance of $h^*(X_t)$ is positive:
$$\sum_{x, x', u} \varpi(x, u)\,P_u(x, x')\,\big[h^*(x') - P_u h^*(x)\big]^2 > 0\,.$$
Then the asymptotic covariance is infinite: $\lim_{n\to\infty} n\,\mathbb{E}[\|\theta_n - \theta^*\|^2] = \infty$.
The assumption $\max_{x,u} \varpi(x, u) \le \tfrac{1}{2}(1-\beta)^{-1}$ is satisfied whenever $\beta \ge \tfrac{1}{2}$.
Matrix-gain stochastic approximation algorithms have appeared in previous literature. In particular, matrix gain techniques have been used to speed up the rate of convergence of Q-learning (see [7] and the second example in Section 3). The general G-Q($\lambda$) algorithm is described as follows, based on a sequence of $d \times d$ matrices $G = \{G_n\}$ and $\lambda \in [0, 1]$: For initialization $\theta_0, \zeta_0 \in \mathbb{R}^d$, the sequence of estimates is defined recursively:
$$\theta_{n+1} = \theta_n + \alpha_{n+1}\, G_{n+1}\, \zeta_n\, d_{n+1}$$
$$d_{n+1} = c(X_n, U_n) + \beta \underline{Q}^{\theta_n}(X_{n+1}) - Q^{\theta_n}(X_n, U_n) \qquad (7)$$
$$\zeta_{n+1} = \lambda\beta\,\zeta_n + \psi(X_{n+1}, U_{n+1})$$
The special case based on stochastic Newton-Raphson is Zap Q($\lambda$)-learning:
Algorithm 1 Zap Q($\lambda$)-learning
Input: $\theta_0 \in \mathbb{R}^d$, $\zeta_0 = \psi(X_0, U_0)$, $\widehat A_0 \in \mathbb{R}^{d \times d}$, $n = 0$, $T \in \mathbb{Z}_+$ (initialization)
1: repeat
2:    $\phi_n(X_{n+1}) := \arg\min_u Q^{\theta_n}(X_{n+1}, u)$;
3:    $d_{n+1} := c(X_n, U_n) + \beta Q^{\theta_n}(X_{n+1}, \phi_n(X_{n+1})) - Q^{\theta_n}(X_n, U_n)$; (temporal difference)
4:    $A_{n+1} := \zeta_n\big(\beta\,\psi(X_{n+1}, \phi_n(X_{n+1})) - \psi(X_n, U_n)\big)^{\mathsf T}$;
5:    $\widehat A_{n+1} = \widehat A_n + \gamma_{n+1}\big(A_{n+1} - \widehat A_n\big)$; (matrix gain update rule)
6:    $\theta_{n+1} = \theta_n - \alpha_{n+1}\,\widehat A_{n+1}^{-1}\,\zeta_n\, d_{n+1}$; (Zap-Q update rule)
7:    $\zeta_{n+1} := \lambda\beta\,\zeta_n + \psi(X_{n+1}, U_{n+1})$; (eligibility vector update rule)
8:    $n = n + 1$
9: until $n \ge T$
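A minimal numpy sketch of the $\lambda = 0$, tabular case of Algorithm 1, with the two time-scale gains $\alpha_n = 1/n$ and $\gamma_n = n^{-\rho}$ of the analysis below; the sample callable is a hypothetical transition generator, and the pseudo-inverse (guarding against a singular matrix estimate early on) is our own regularization, not part of the algorithm as stated:

    import numpy as np

    def zap_q_learning(sample, l, l_u, c, beta, T, rho=0.85):
        # Zap Q(0)-learning with the Watkins (indicator) basis.
        # sample(n) returns one transition (x, u, x_next); c is an (l, l_u)
        # array of costs; both are assumptions of this sketch.
        d = l * l_u
        theta = np.zeros(d)                  # Q-function estimate, flattened
        A_hat = np.zeros((d, d))
        idx = lambda x, u: x * l_u + u       # index of basis vector psi(x, u)
        for n in range(1, T + 1):
            x, u, x_next = sample(n)
            Q_next = theta.reshape(l, l_u)[x_next]
            u_opt = int(np.argmin(Q_next))                 # phi_n(X_{n+1})
            d_n = c[x, u] + beta * Q_next[u_opt] - theta[idx(x, u)]
            psi = np.zeros(d); psi[idx(x, u)] = 1.0        # zeta_n (lambda = 0)
            psi_next = np.zeros(d); psi_next[idx(x_next, u_opt)] = 1.0
            A_n = np.outer(psi, beta * psi_next - psi)
            A_hat += n ** (-rho) * (A_n - A_hat)           # gamma_n = n^{-rho}
            theta -= (1.0 / n) * np.linalg.pinv(A_hat) @ (psi * d_n)  # alpha_n = 1/n
        return theta.reshape(l, l_u)

Recomputing the pseudo-inverse at every step costs O(d^3); an incremental inverse update (e.g. via the matrix inversion lemma) would be the natural optimization in practice.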
A special case is considered in the analysis here: the basis is chosen as in Watkins' algorithm, $\lambda = 0$, and $\alpha_n \equiv 1/n$. An equivalent representation for the parameter recursion is thus
$$\theta_{n+1} = \theta_n - \alpha_{n+1}\,\widehat A_{n+1}^{-1}\big(\Psi_n c + A_{n+1}\theta_n\big)\,,$$
in which $c$ and $\theta_n$ are treated as $d$-dimensional vectors rather than functions on $\mathsf X \times \mathsf U$, and $\Psi_n = \psi(X_n, U_n)\psi(X_n, U_n)^{\mathsf T}$.
Part of the analysis is based on a recursion for the following $d$-dimensional sequence:
$$\widehat C_n = -\Pi^{-1}\,\widehat A_n\,\theta_n\,, \quad n \ge 1\,,$$
where $\Pi$ is the $d \times d$ diagonal matrix with entries $\varpi$ (the steady-state distribution of $(X, U)$). The sequence $\{\widehat C_n\}$ admits a very simple recursion in the special case $\gamma \equiv \alpha$:
$$\widehat C_{n+1} = \widehat C_n + \alpha_{n+1}\big[\Pi^{-1}\Psi_n c - \widehat C_n\big]\,. \qquad (8)$$
It follows that $\widehat C_n$ converges to $c$ as $n \to \infty$, since (8) is essentially a Monte-Carlo average of $\{\Pi^{-1}\Psi_n c : n \ge 0\}$. Analysis for this case is complicated since $\widehat A_n$ is obtained as a uniform average of $\{A_n\}$.
The main contributions of this paper concern a two time-scale implementation for which
$$\sum_n \gamma_n = \infty\,, \quad \sum_n \gamma_n^2 < \infty \quad \text{and} \quad \lim_{n\to\infty} \frac{\alpha_n}{\gamma_n} = 0\,. \qquad (9)$$
In our analysis, we restrict to $\gamma_n \equiv 1/n^\rho$, for some fixed $\rho \in (\tfrac{1}{2}, 1)$. Through ODE analysis, it is argued that the Zap Q-learning algorithm closely resembles an implementation of Newton-Raphson in this case. This analysis suggests that $\{\widehat A_n\}$ more closely tracks the mean of $\{A_n\}$. Theorem 2.2 summarizes the main results under Q1, and the following additional assumptions:
Q2: The optimal policy $\phi^*$ is unique.
Q3: The sequence of policies $\{\phi_n\}$ satisfies
$$\sum_{n=1}^{\infty} \alpha_n\, \mathbb{I}\{\phi_{n+1} \ne \phi_n\} < \infty\,, \quad \text{a.s.}$$
The assumption Q3 is used to address the discontinuity in the recursion for $\{\widehat A_n\}$ resulting from the dependence of $A_{n+1}$ on $\theta_n$.
Theorem 2.2. Suppose that Assumptions Q1-Q3 hold, and the gain sequences $\alpha$ and $\gamma$ satisfy
$$\alpha_n = n^{-1}\,, \quad \gamma_n = n^{-\rho}\,, \quad n \ge 1\,,$$
for some fixed $\rho \in (\tfrac{1}{2}, 1)$. Then,
(i) The parameter sequence $\{\theta_n\}$ obtained using the Zap-Q algorithm converges to $Q^*$ a.s.
(ii) The asymptotic covariance (2) is minimized over all G-Q(0) matrix gain versions of Watkins' Q-learning algorithm.
(iii) An ODE approximation holds for the sequence $\{\theta_n, \widehat C_n\}$, by continuous functions $(q, \varsigma)$ satisfying
$$q_t = \mathcal Q(\varsigma_t)\,, \qquad \frac{d}{dt}\varsigma_t = -\varsigma_t + c\,. \qquad (10)$$
This ODE approximation is exponentially asymptotically stable, with $\lim_{t\to\infty} q_t = Q^*$.
The ODE result (10) is an important aspect of this work. It says that the sequence $\{q_t\}$, a continuous time approximation of the parameter estimates $\{\theta_n\}$ that are obtained using the Zap Q-learning algorithm, evolves as the Q-function of some time-varying cost function $\varsigma_t$. Furthermore, this time-varying cost function $\varsigma_t$ has dynamics independent of $q_t$, and converges to $c$, the cost function defined in the MDP model. Convergence follows from the continuity of the mapping $\mathcal Q$:
$$\lim_{n\to\infty} \theta_n = \lim_{t\to\infty} q_t = \lim_{t\to\infty} \mathcal Q(\varsigma_t) = \mathcal Q(c) = Q^*\,.$$
The reader is referred to [8] for complete proofs and technical details.
3 Numerical Results
Results from numerical experiments are surveyed here to illustrate the performance of the Zap Q-learning algorithm.
Figure 1: Graph for MDP (six-node undirected graph with nodes 1 through 6)
Finite state-action MDP Consider first a simple path-finding problem. The state space $\mathsf X = \{1, \dots, 6\}$ coincides with the six nodes on the undirected graph shown in Fig. 1. The action space $\mathsf U = \{e_{x,x'}\}$, $x, x' \in \mathsf X$, consists of all feasible edges along which the agent can travel, including each "self-loop", $u = e_{x,x}$. The goal is to reach the state $x^* = 6$ and maximize the time spent there. The reader is referred to [8] for details on the cost function and other modeling assumptions.
Six variants of Q-learning were tested: Watkins' algorithm (5), Watkins' algorithm with Ruppert-Polyak-Juditsky (RPJ) averaging [21, 22, 24], Watkins' algorithm with a "polynomial learning rate" $\alpha_n \equiv n^{-0.6}$ [9], Speedy Q-learning [1], and two versions of Zap Q-learning: $\gamma_n \equiv \alpha_n \equiv n^{-1}$, and $\gamma_n \equiv \alpha_n^{0.85} \equiv n^{-0.85}$.
Fig. 2 shows the normalized trace of the asymptotic covariance of Watkins' algorithm with step-size $\alpha_n = g/n$, as a function of $g > 0$. Based on this observation, or on Theorem 2.1, it follows that the asymptotic covariance is not finite for the standard Watkins' algorithm with $\alpha_n \equiv 1/n$. In simulations it was found that the parameter estimates are not close to $\theta^*$ even after many millions of iterations.
It was also found that Watkins' algorithm performed poorly in practice for any scalar gain. For example, more than half of the $10^3$ experiments using $\beta = 0.8$ and $g = 70$ resulted in values of $\theta_n(15)$ exceeding $\theta^*(15)$ by $10^4$ (with $\theta^*(15) \approx 500$), even with $n = 10^6$. The algorithm performed well with the introduction of projection (to ensure that the parameter estimates evolve on a bounded set) in the case $\beta = 0.8$. With $\beta = 0.99$, the performance was unacceptable for any scalar gain, even with projection.
Figure 2: Normalized trace of the asymptotic covariance, as a function of the scalar gain $g$, for $\beta = 0.8$ and $\beta = 0.99$
Fig. 3 shows normalized histograms of $\{W_n^i(k) = \sqrt{n}\,(\theta_n^i(k) - \bar\theta_n(k)) : 1 \le i \le N\}$ for the projected Watkins Q-learning with gain $g = 70$, and the Zap algorithm, $\gamma_n \equiv \alpha_n^{0.85}$. The theoretical predictions were based on the solution to a Lyapunov equation [8]. Results for $\beta = 0.99$ contained in [8] show similar solidarity with asymptotic theory.
Figure 3: Asymptotic variance for Watkins' $g = 70$ and Zap Q-learning, $\gamma_n \equiv \alpha_n^{0.85}$; $\beta = 0.8$. Panels: (a) $W_n(18)$ with $n = 10^4$; (b) $W_n(18)$ with $n = 10^6$; (c) $W_n(10)$ with $n = 10^4$; (d) $W_n(10)$ with $n = 10^6$. Each panel compares the experimental histogram with the theoretical and experimental pdfs.
Bellman Error The Bellman error at iteration $n$ is denoted:
$$B_n(x, u) = \theta_n(x, u) - r(x, u) - \beta \sum_{x' \in \mathsf X} P_u(x, x') \max_{u' \in \mathsf U} \theta_n(x', u')\,.$$
This is identically zero if and only if $\theta_n = Q^*$. Fig. 4 contains plots of the maximal error $\bar B_n = \max_{x,u} |B_n(x, u)|$ for the six algorithms.
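A sketch of this maximal-error computation in the tabular case; the array layout, with the transition matrices indexed as P[u, x, x'], is our own convention:

    import numpy as np

    def max_bellman_error(theta, r, P, beta):
        # B_bar_n = max_{x,u} |B_n(x,u)| for a tabular Q estimate theta (l, l_u);
        # r is the (l, l_u) reward array, P the (l_u, l, l) transition matrices.
        v = theta.max(axis=1)                        # max_{u'} theta(x', u'), per x'
        B = theta - r - beta * np.einsum('uxy,y->xu', P, v)
        return np.abs(B).max()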
Though all six algorithms perform reasonably well when $\beta = 0.8$, Zap Q-learning is the only one that achieves near zero Bellman error within $n = 10^6$ iterations in the case $\beta = 0.99$. Moreover, the performance of the two time-scale algorithm is clearly superior to that of the one time-scale algorithm. It is also observed that the Watkins algorithm with an optimized scalar gain (i.e., step-size $\alpha_n \equiv g^*/n$ with $g^*$ chosen so that the asymptotic variance is minimized) has the best performance among scalar-gain algorithms.
Figure 4: Maximum Bellman error $\{\bar B_n : n \ge 0\}$ for the six Q-learning algorithms (Speedy, Polynomial, Watkins $g = 70$, RPJ, Zap-Q $\gamma_n \equiv \alpha_n$, Zap-Q $\gamma_n \equiv \alpha_n^{0.85}$), for $\beta = 0.8$ and $\beta = 0.99$ (the latter with $g = 1500$); horizontal axis $n \times 10^5$.
Fig. 4 shows only the typical behavior; repeated trials were run to investigate the range of possible outcomes. Plots of the mean and $2\sigma$ confidence intervals of $\bar B_n$ are shown in Fig. 5 for $\beta = 0.99$.
Figure 5: Simulation-based $2\sigma$ confidence intervals of the Bellman error for the six Q-learning algorithms (Speedy, Polynomial, RPJ, the two Zap-Q variants, and scalar gains $g = 500, 1500, 5000$), for the case $\beta = 0.99$; the lower panels show normalized histograms of the Bellman error at $n = 10^6$.
Finance model The next example is taken from [27, 7]. The reader is referred to these references
for complete details of the problem set-up and the reinforcement learning architecture used in this
prior work. The example is of interest because it shows how the Zap Q-learning algorithm can be
used with a more general basis, and also how the technique can be extended to optimal stopping
time problems.
The Markovian state process for the model evolves in $\mathsf X = \mathbb{R}^{100}$. The "time to exercise" is modeled as a discrete valued stopping time $\tau$. The associated expected reward is defined as $\mathbb{E}[\beta^\tau r(X_\tau)]$, where $\beta \in (0, 1)$, $r(X_n) := X_n(100) = \tilde p_n / \tilde p_{n-100}$, and $\{\tilde p_t : t \in \mathbb{R}\}$ is a geometric Brownian motion (derived from an exogenous price-process). The objective of finding a policy that maximizes the expected reward is modeled as an optimal stopping time problem.
The value function is defined to be the supremum over all stopping times:
$$h^*(x) = \sup_{\tau > 0} \mathbb{E}\big[\beta^\tau r(X_\tau) \mid X_0 = x\big]\,.$$
This solves the Bellman equation: for each $x \in \mathsf X$,
$$h^*(x) = \max\big(r(x),\ \beta\,\mathbb{E}[h^*(X_{n+1}) \mid X_n = x]\big)\,.$$
The associated Q-function is denoted $Q^*(x) := \beta\,\mathbb{E}[h^*(X_{n+1}) \mid X_n = x]$, and solves a similar fixed point equation:
$$Q^*(x) = \beta\,\mathbb{E}\big[\max\big(r(X_{n+1}),\ Q^*(X_{n+1})\big) \mid X_n = x\big]\,.$$
The Q(0)-learning algorithm considered in [27] is defined as follows:
$$\theta_{n+1} = \theta_n + \alpha_{n+1}\,\psi(X_n)\Big(\beta \max\big(X_{n+1}(100),\ Q^{\theta_n}(X_{n+1})\big) - Q^{\theta_n}(X_n)\Big)\,, \quad n \ge 0\,.$$
In [7] the authors attempt to improve the performance of the Q(0) algorithm through the use of a sequence of matrix gains, which can be regarded as an instance of the G-Q(0)-learning algorithm defined in (7). For details see this prior work as well as the extended version of this paper [8].
A gain sequence $\{G_n\}$ was introduced in [7] to improve performance. Denoting by $G$ and $A$ the steady state means of $\{G_n\}$ and $\{A_n\}$ respectively, the eigenvalues corresponding to the matrix $GA$ are shown on the right hand side of Fig. 6. It is observed that the sufficient condition for a finite asymptotic covariance is "just" satisfied in this algorithm: the maximum eigenvalue of $GA$ is approximately $\lambda \approx -0.525 < -\tfrac{1}{2}$ (see Theorem 2.1 of [8]). It is worth stressing that a finite asymptotic covariance was not a design goal in this prior work. It is only now, on revisiting this paper, that we find that the sufficient condition $\lambda < -\tfrac{1}{2}$ is satisfied.
The Zap Q-learning algorithm for this example is defined by the following recursion:
$$\theta_{n+1} = \theta_n - \alpha_{n+1}\,\widehat A_{n+1}^{-1}\,\psi(X_n)\Big(\beta \max\big(X_{n+1}(100),\ Q^{\theta_n}(X_{n+1})\big) - Q^{\theta_n}(X_n)\Big)\,,$$
$$\widehat A_{n+1} = \widehat A_n + \gamma_n\big[A_{n+1} - \widehat A_n\big]\,, \qquad A_{n+1} = \psi(X_n)\,\varphi^{\mathsf T}(\theta_n, X_{n+1})\,,$$
$$\varphi(\theta_n, X_{n+1}) = \beta\,\psi(X_{n+1})\,\mathbb{I}\{Q^{\theta_n}(X_{n+1}) \ge X_{n+1}(100)\} - \psi(X_n)\,.$$
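A minimal sketch of this recursion with linear function approximation; the feature map psi, exercise reward r, and the sampled state trajectory X are assumed inputs (all returning/holding numpy arrays), and the pseudo-inverse again stands in for the matrix inverse:

    import numpy as np

    def zap_q_stopping(psi, r, X, beta, T, rho=0.85):
        # Zap Q-learning for the optimal stopping problem, Q^theta(x) = theta @ psi(x).
        d = len(psi(X[0]))
        theta = np.zeros(d)
        A_hat = np.zeros((d, d))
        for n in range(1, min(T, len(X))):
            x, x_next = X[n - 1], X[n]
            f, f_next = psi(x), psi(x_next)
            Q_next = theta @ f_next
            cont = Q_next >= r(x_next)           # continue iff Q >= exercise value
            d_n = beta * max(r(x_next), Q_next) - theta @ f
            phi = beta * f_next * cont - f       # varphi(theta_n, X_{n+1})
            A_hat += n ** (-rho) * (np.outer(f, phi) - A_hat)
            theta -= (1.0 / n) * np.linalg.pinv(A_hat) @ (f * d_n)
        return theta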
High performance despite ill-conditioned matrix gain The real part of the eigenvalues of A are
shown on a logarithmic scale on the left-hand side of Fig. 6. These eigenvalues have a wide spread:
the ratio of the largest to the smallest real parts of the eigenvalues is of the order $10^4$. This presents a
challenge in applying any method. In particular, it was found that the performance of any scalar-gain
algorithm was extremely poor, even with projection of parameter estimates.
Figure 6: Eigenvalues of $A$ and $GA$ for the finance example. Left: real parts of the eigenvalues of $A$, shown on a logarithmic scale; right: eigenvalues of $GA$ in the complex plane, with maximal real part approximately $-0.525$.
Figure 7: Theoretical and empirical variance for the finance example. Panels: $W_n(1)$ and $W_n(7)$, each for $n = 2\times 10^4$ and $n = 2\times 10^6$, comparing the experimental histogram with the theoretical and experimental pdfs for the Zap-Q algorithm.
In applying the Zap Q-learning algorithm it was found that the estimates $\{\widehat A_n\}$ defined in the above recursion are nearly singular. Despite the unfavorable setting for this approach, the performance of the algorithm was better than any alternative that was tested. Fig. 7 contains normalized histograms of $\{W_n^i(k) = \sqrt{n}\,(\theta_n^i(k) - \bar\theta_n(k)) : 1 \le i \le N\}$ for the Zap-Q algorithm, with $\gamma_n \equiv \alpha_n^{0.85} \equiv n^{-0.85}$. The variance for finite $n$ is close to the theoretical predictions based on the optimal asymptotic covariance. The histograms were generated for two values of $n$, and $k = 1, 7$. Of the $d = 10$ possibilities, the histogram for $k = 1$ had the worst match with theoretical predictions, and $k = 7$ was the closest. The histograms for the G-Q(0) algorithm contained in [8] showed extremely high variance, and the experimental results did not match theoretical predictions.
Figure 8: Histograms of average reward: G-Q(0) learning ($g = 100, 200$) and Zap-Q-learning ($\gamma_n \equiv \alpha_n^\rho \equiv n^{-\rho}$, $\rho = 1.0, 0.8, 0.85$), at $n = 2\times 10^4$, $2\times 10^5$ and $2\times 10^6$.

Table 1: Percentage of outliers observed in $N = 1000$ runs. Each sub-table gives the percentage of runs which resulted in an average reward below a certain value: (a) $h_{\theta_n}(x) \le 0.999$; (b) $h_{\theta_n}(x) \le 0.95$; (c) $h_{\theta_n}(x) \le 0.5$.

                        (a) below 0.999        (b) below 0.95         (c) below 0.5
n                     2e4    2e5    2e6      2e4    2e5    2e6      2e4    2e5    2e6
G-Q(0) g = 100        82.7   77.5   68       81.1   75.5   65.4     54.5   49.7   39.5
G-Q(0) g = 200        82.4   72.5   55.9     80.6   70.6   53.7     64.1   51.8   39
Zap-Q rho = 1.0       35.7   0      0        0.55   0      0        0      0      0
Zap-Q rho = 0.8       0.17   0.03   0        0      0      0        0      0      0
Zap-Q rho = 0.85      0.13   0.03   0        0      0      0        0      0      0
Histograms of the average reward h_θn(x) obtained from N = 1000 simulations are contained in Fig. 8, for n = 2×10^4, 2×10^5 and 2×10^6, and x(i) = 1, 1 ≤ i ≤ 100. Omitted in this figure are outliers: values of the reward in the interval [0, 1). Table 1 lists the percentage of outliers for each case. The asymptotic covariance of the G-Q(0) algorithm was not far from optimal (its trace is about 15 times larger than that obtained using Zap Q-learning). However, it is observed that this algorithm suffers from much larger outliers.
4 Conclusions
Watkins' Q-learning algorithm is elegant, but subject to two common and valid complaints: it can be very slow to converge, and it is not obvious how to extend this approach to obtain a stable algorithm in non-trivial parameterized settings (i.e., without a look-up table representation for the Q-function). This paper addresses both concerns with the new Zap Q(λ) algorithms that are motivated by the asymptotic theory of stochastic approximation.
The potential complexity introduced by the matrix gain is not of great concern in many cases, because of the dramatic acceleration in the rate of convergence. Moreover, the main contribution of this paper is not a single algorithm but a class of algorithms, wherein the computational complexity can be dealt with separately. For example, in a parameterized setting, the basis functions can be intelligently pruned via random projection [2].
There are many avenues for future research. It would be valuable to find an alternative to Assumption Q3 that is readily verified. Based on the ODE analysis, it seems likely that the conclusions of Theorem 2.2 hold without this additional assumption. No theory has been presented here for non-ideal parameterized settings. It is conjectured that conditions for stability of Zap Q(λ)-learning will hold under general conditions. Consistency is a more challenging problem.
In terms of algorithm design, it is remarkable to see how well the scalar-gain algorithms perform,
provided projection is employed and the ratio of largest to smallest real parts of the eigenvalues of A
is not too large. It is possible to estimate the optimal scalar gain based on estimates of the matrix A
that is central to this paper. How to do so without introducing high complexity is an open question.
On the other hand, the performance of Ruppert-Polyak-Juditsky (RPJ) averaging is unpredictable. In many experiments it is found that the asymptotic covariance is a poor indicator of finite-n performance. There are many suggestions in the literature for improving this technique. The results in this paper suggest new approaches that we hope will simultaneously
(i) reduce the complexity and potential numerical instability of matrix inversion,
(ii) improve transient performance, and
(iii) maintain optimality of the asymptotic covariance.
Acknowledgments: This research was supported by the National Science Foundation under grants
EPCN-1609131 and CPS-1259040.
References
[1] M. G. Azar, R. Munos, M. Ghavamzadeh, and H. Kappen. Speedy Q-learning. In Advances in Neural Information Processing Systems, 2011.
[2] K. Barman and V. S. Borkar. A note on linear function approximation using random projections. Systems & Control Letters, 57(9):784-786, 2008.
[3] A. Benveniste, M. Métivier, and P. Priouret. Adaptive Algorithms and Stochastic Approximations, volume 22 of Applications of Mathematics (New York). Springer-Verlag, Berlin, 1990. Translated from the French by Stephen S. Wilson.
[4] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Hindustan Book Agency and Cambridge University Press (jointly), Delhi, India and Cambridge, UK, 2008.
[5] V. S. Borkar and S. P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control Optim., 38(2):447-469, 2000. (Also presented at the IEEE CDC, December 1998.)
[6] J. A. Boyan. Technical update: Least-squares temporal difference learning. Mach. Learn., 49(2-3):233-246, 2002.
[7] D. Choi and B. Van Roy. A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning. Discrete Event Dynamic Systems: Theory and Applications, 16(2):207-239, 2006.
[8] A. M. Devraj and S. P. Meyn. Fastest convergence for Q-learning. ArXiv e-prints, July 2017.
[9] E. Even-Dar and Y. Mansour. Learning rates for Q-learning. Journal of Machine Learning Research, 5(Dec):1-25, 2003.
[10] A. Givchi and M. Palhang. Quasi Newton temporal difference learning. In Asian Conference on Machine Learning, pages 159-172, 2015.
[11] P. W. Glynn and D. Ormoneit. Hoeffding's inequality for uniformly ergodic Markov chains. Statistics and Probability Letters, 56:143-146, 2002.
[12] V. R. Konda and J. N. Tsitsiklis. Convergence rate of linear two-time-scale stochastic approximation. Ann. Appl. Probab., 14(2):796-819, 2004.
[13] H. J. Kushner and G. G. Yin. Stochastic Approximation Algorithms and Applications, volume 35 of Applications of Mathematics (New York). Springer-Verlag, New York, 1997.
[14] R. B. Lund, S. P. Meyn, and R. L. Tweedie. Computable exponential convergence rates for stochastically ordered Markov processes. Ann. Appl. Probab., 6(1):218-237, 1996.
[15] D.-J. Ma, A. M. Makowski, and A. Shwartz. Stochastic approximations for finite-state Markov chains. Stochastic Process. Appl., 35(1):27-45, 1990.
[16] P. G. Mehta and S. P. Meyn. Q-learning and Pontryagin's minimum principle. In IEEE Conference on Decision and Control, pages 3598-3605, Dec. 2009.
[17] S. P. Meyn and R. L. Tweedie. Computable bounds for convergence rates of Markov chains. Ann. Appl. Probab., 4:981-1011, 1994.
[18] E. Moulines and F. R. Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems 24, pages 451-459. Curran Associates, Inc., 2011.
[19] Y. Pan, A. M. White, and M. White. Accelerated gradient temporal difference learning. In AAAI, pages 2464-2470, 2017.
[20] D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electron. J. Probab., 20:32 pp., 2015.
[21] B. T. Polyak. A new method of stochastic approximation type. Avtomatika i Telemekhanika (in Russian); translated in Automat. Remote Control, 51 (1991), pages 98-107, 1990.
[22] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838-855, 1992.
[23] D. Ruppert. A Newton-Raphson version of the multivariate Robbins-Monro procedure. The Annals of Statistics, 13(1):236-245, 1985.
[24] D. Ruppert. Efficient estimators from a slowly convergent Robbins-Monro process. Technical Report No. 781, Cornell University, School of Operations Research and Industrial Engineering, Ithaca, NY, 1988.
[25] C. Szepesvári. The asymptotic convergence-rate of Q-learning. In Proceedings of the 10th International Conference on Neural Information Processing Systems, pages 1064-1070. MIT Press, 1997.
[26] J. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16:185-202, 1994.
[27] J. N. Tsitsiklis and B. Van Roy. Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Trans. Automat. Control, 44(10):1840-1851, 1999.
[28] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, Cambridge, UK, 1989.
[29] H. Yao, S. Bhatnagar, and C. Szepesvári. LMS-2: Towards an algorithm that is as cheap as LMS and almost as efficient as RLS. In Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 2009 28th Chinese Control Conference (CDC/CCC 2009), pages 1181-1188. IEEE, 2009.
[30] H. Yao and Z.-Q. Liu. Preconditioned temporal difference learning. In Proceedings of the 25th International Conference on Machine Learning, pages 1208-1215. ACM, 2008.
[31] H. Yu and D. P. Bertsekas. Q-learning and policy iteration algorithms for stochastic shortest path problems. Annals of Operations Research, 208(1):95-132, 2013.
6,433 | 6,819 | Expectation Propagation for t-Exponential Family
Using q-Algebra
Futoshi Futami
The University of Tokyo, RIKEN
[email protected]
Issei Sato
The University of Tokyo, RIKEN
[email protected]
Masashi Sugiyama
RIKEN, The University of Tokyo
[email protected]
Abstract
Exponential family distributions are highly useful in machine learning since their calculation can be performed efficiently through natural parameters. The exponential family has recently been extended to the t-exponential family, which contains Student-t distributions as family members and thus allows us to handle noisy data well. However, since the t-exponential family is defined by the deformed exponential, an efficient learning algorithm for the t-exponential family such as expectation propagation (EP) cannot be derived in the same way as for the ordinary exponential family. In this paper, we borrow the mathematical tools of q-algebra from statistical physics and show that the pseudo additivity of distributions allows us to perform calculation of t-exponential family distributions through natural parameters. We then develop an expectation propagation (EP) algorithm for the t-exponential family, which provides a deterministic approximation to the posterior or predictive distribution with simple moment matching. We finally apply the proposed EP algorithm to the Bayes point machine and Student-t process classification, and demonstrate their performance numerically.
1 Introduction
Exponential family distributions play an important role in machine learning, due to the fact that their calculation can be performed efficiently and analytically through natural parameters or expected sufficient statistics [1]. This property is particularly useful in the Bayesian framework since a conjugate prior always exists for an exponential family likelihood and the prior and posterior are often in the same exponential family. Moreover, parameters of the posterior distribution can be evaluated only through natural parameters.
As exponential family members, Gaussian distributions are most commonly used because their
moments, conditional distribution, and joint distribution can be computed analytically. Gaussian
processes are a typical Bayesian method based on Gaussian distributions, which are used for various
purposes such as regression, classification, and optimization [8]. However, Gaussian distributions are
sensitive to outliers and heavier-tailed distributions are often more preferred in practice. For example,
Student-t distributions and Student-t processes are good alternatives to Gaussian distributions [4]
and Gaussian processes [10], respectively.
A technical problem of the Student-t distribution is that it does not belong to the exponential family
unlike the Gaussian distribution and thus cannot enjoy good properties of the exponential family. To
cope with this problem, the exponential family was recently generalized to the t-exponential family [3],
which contains Student-t distributions as family members. Following this line, the Kullback-Leibler
divergence was generalized to the t-divergence, and approximation methods based on t-divergence
minimization have been explored [2]. However, the t-exponential family does not allow us to employ standard useful mathematical tricks, e.g., logarithmic transformation does not reduce the product of t-exponential family functions into summation. For this reason, the t-exponential family unfortunately does not inherit an important property of the original exponential family, namely that calculation can be performed through natural parameters. Furthermore, while the dimensionality of sufficient statistics is the same as that of the natural parameters in the exponential family, and thus there is no need to increase the parameter size to incorporate new information [9], this useful property does not hold in the t-exponential family.
The purpose of this paper is to further explore mathematical properties of natural parameters of the
t-exponential family through pseudo additivity of distributions based on q-algebra used in statistical
physics [7, 11]. More specifically, our contributions in this paper are three-fold:
1. We show that, in the same way as ordinary exponential family distributions, q-algebra allows us
to handle the calculation of t-exponential family distributions through natural parameters.
2. Our q-algebra based method enables us to extend assumed density filtering (ADF) [2] and develop
an algorithm of expectation propagation (EP) [6] for the t-exponential family. In the same way as
the original EP algorithm for ordinary exponential family distributions, our EP algorithm provides
a deterministic approximation to the posterior or predictive distribution for t-exponential family
distributions with simple moment matching.
3. We apply the proposed EP algorithm to the Bayes point machine [6] and Student-t process classification, and we demonstrate their usefulness as alternatives to the Gaussian approaches numerically.
2 t-exponential Family
In this section, we review the t-exponential family [3, 2], which is a generalization of the exponential
family.
The t-exponential family is defined as,
\[ p(x;\theta) = \exp_t\big(\langle \Phi(x), \theta\rangle - g_t(\theta)\big), \tag{1} \]
where \(\exp_t(x)\) is the deformed exponential function defined as
\[ \exp_t(x) = \begin{cases} \exp(x) & \text{if } t = 1, \\ [1 + (1-t)x]^{\frac{1}{1-t}} & \text{otherwise}, \end{cases} \tag{2} \]
and \(g_t(\theta)\) is the log-partition function that satisfies
\[ \nabla_\theta\, g_t(\theta) = \mathbb{E}_{p^{\mathrm{es}}}[\Phi(x)]. \tag{3} \]
The notation \(\mathbb{E}_{p^{\mathrm{es}}}\) denotes the expectation over \(p^{\mathrm{es}}(x)\), where \(p^{\mathrm{es}}(x)\) is the escort distribution of \(p(x)\) defined as
\[ p^{\mathrm{es}}(x) = \frac{p(x)^t}{\int p(x)^t\, dx}. \tag{4} \]
We call \(\theta\) a natural parameter and \(\Phi(x)\) sufficient statistics.
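As a minimal sketch of these definitions (an illustration of ours, not from the paper; the escort normalizer is approximated by simple Riemann quadrature on a grid):

```python
import numpy as np

def exp_t(x, t):
    """Deformed exponential of Eq. (2)."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def escort(p, dx, t):
    """Escort distribution of Eq. (4), for a density p sampled
    on a uniform grid with spacing dx."""
    pt = p ** t
    return pt / (pt.sum() * dx)
```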
Let us express the k-dimensional Student-t distribution with v degrees of freedom as
\[ \mathrm{St}(x; v, \mu, \Sigma) = \frac{\Gamma((v+k)/2)}{(\pi v)^{k/2}\,\Gamma(v/2)\,|\Sigma|^{1/2}} \Big(1 + (x-\mu)^{\top}(v\Sigma)^{-1}(x-\mu)\Big)^{-\frac{v+k}{2}}, \tag{5} \]
where \(\Gamma(x)\) is the gamma function, \(|A|\) is the determinant of matrix \(A\), and \(A^{\top}\) is the transpose of matrix \(A\).
follows. First, we have
(
) 1
St(x; v, ?, ?) = ? + ? ? (x ? ?)? (v?)?1 (x ? ?) 1?t ,
(6)
)1?t
(
?((v + k)/2)
.
(7)
where ? =
(?v)k/2 ?(v/2)|?|1/2
2
Note that relation ?(v + k)/2 = 1/(1 ? t) holds, from which we have
(
)
?
??(x), ?? =
(x? Kx ? 2?? Kx),
1?t
(
)
?
1
gt (?) = ?
(?? K? + 1) +
,
1?t
1?t
(8)
(9)
where K = (v?)?1 . Then, we can express the Student-t distribution as a member of the t-exponential
family as:
(
) 1
(
)
St(x; v, ?, ?) = 1 + (1 ? t)??(x), ?? ? gt (?) 1?t = expt ??(x), ?? ? gt (?) .
(10)
If t = 1, the deformed exponential function is reduced to the ordinary exponential function, and therefore the t-exponential family is reduced to the ordinary exponential family; this case corresponds to the Student-t distribution with infinite degrees of freedom. For t-exponential family distributions, the t-divergence is defined as follows [2]:
\[ D_t(p \,\|\, \tilde p) = \int \big( p^{\mathrm{es}}(x) \ln_t p(x) - p^{\mathrm{es}}(x) \ln_t \tilde p(x) \big)\, dx, \tag{11} \]
where \(\ln_t x := \frac{x^{1-t} - 1}{1-t}\) \((x \ge 0,\ t \in \mathbb{R}^+)\) and \(p^{\mathrm{es}}(x)\) is the escort function of \(p(x)\).
3 Assumed Density Filtering and Expectation Propagation
We briefly review assumed density filtering (ADF) and expectation propagation (EP) [6].
Let \(D = \{(x_1, y_1), \ldots, (x_i, y_i)\}\) be input-output paired data. We denote the likelihood for the i-th data point as \(l_i(w)\) and the prior distribution of parameter \(w\) as \(p_0(w)\). The total likelihood is given as \(\prod_i l_i(w)\), and the posterior distribution can be expressed as \(p(w|D) \propto p_0(w) \prod_i l_i(w)\).
3.1 Assumed Density Filtering
ADF is an online approximation method for the posterior distribution.
Suppose that \(i-1\) samples \((x_1, y_1), \ldots, (x_{i-1}, y_{i-1})\) have already been processed and an approximation to the posterior distribution, \(\tilde p_{i-1}(w)\), has already been obtained. Given the i-th sample \((x_i, y_i)\), the posterior distribution \(p_i(w)\) can be obtained as
\[ p_i(w) \propto \tilde p_{i-1}(w)\, l_i(w). \tag{12} \]
Since the true posterior distribution \(p_i(w)\) cannot be obtained analytically, it is approximated in ADF by minimizing the Kullback-Leibler (KL) divergence from \(p_i(w)\) to its approximation:
\[ \tilde p_i = \arg\min_{\tilde p}\, \mathrm{KL}(p_i \,\|\, \tilde p). \tag{13} \]
Note that if \(p_i\) and \(\tilde p\) are both exponential family members, the above calculation is reduced to moment matching.
3.2 Expectation Propagation
Although ADF is an effective method for online learning, it is not favorable for non-online situations, because the approximation quality depends heavily on the permutation of the data [6]. To overcome this problem, EP was proposed [6].
In EP, an approximation of the posterior that contains all data terms is prepared beforehand, typically as a product of data-corresponding terms:
\[ \tilde p(w) = \frac{1}{Z} \prod_i \tilde l_i(w), \tag{14} \]
where \(Z\) is the normalizing constant. In the above expression, the factor \(\tilde l_i(w)\), which is often called a site approximation [9], corresponds to the local likelihood \(l_i(w)\). If each \(\tilde l_i(w)\) is an exponential family member, the total approximation also belongs to the exponential family.
Differently from ADF, EP maintains these site approximations and iteratively updates them with the following four steps. First, when we update site \(\tilde l_j(w)\), we eliminate the effect of site \(j\) from the total approximation as
\[ \tilde p^{\setminus j}(w) = \frac{\tilde p(w)}{\tilde l_j(w)}, \tag{15} \]
where \(\tilde p^{\setminus j}(w)\) is often called a cavity distribution [9]. If an exponential family distribution is used, the above calculation is reduced to subtraction of natural parameters. Second, we incorporate the likelihood \(l_j(w)\) by minimizing the divergence \(\mathrm{KL}\big(\tilde p^{\setminus j}(w)\, l_j(w)/Z^{\setminus j} \,\|\, \tilde p(w)\big)\), where \(Z^{\setminus j}\) is the normalizing constant. Note that this minimization is reduced to moment matching for the exponential family. After this step, we obtain \(\tilde p(w)\). Third, we exclude the effect of terms other than \(j\), which is equivalent to calculating a cavity distribution as \(\tilde l_j(w)^{\mathrm{new}} \propto \tilde p(w)/\tilde p^{\setminus j}(w)\). Finally, we update the site approximation by replacing \(\tilde l_j(w)\) with \(\tilde l_j(w)^{\mathrm{new}}\).
Note again that calculation of EP is reduced to addition or subtraction of natural parameters for the
exponential family.
3.3 ADF for t-exponential Family
ADF for the t-exponential family was proposed in [2], which uses the t-divergence instead of the KL divergence:
\[ \tilde p = \arg\min_{p'}\, D_t(p \,\|\, p') = \arg\min_{p'} \int \big( p^{\mathrm{es}}(x) \ln_t p(x) - p^{\mathrm{es}}(x) \ln_t p'(x;\theta) \big)\, dx. \tag{16} \]
When an approximate distribution is chosen from the t-exponential family, we can utilize the property \(\nabla_\theta g_t(\theta) = \mathbb{E}_{\tilde p^{\mathrm{es}}}[\Phi(x)]\), where \(\tilde p^{\mathrm{es}}\) is the escort function of \(\tilde p(x)\). Then, minimization of the t-divergence yields
\[ \mathbb{E}_{p^{\mathrm{es}}}[\Phi(x)] = \mathbb{E}_{\tilde p^{\mathrm{es}}}[\Phi(x)]. \tag{17} \]
This is moment matching, which is a celebrated property of the exponential family. Since the expectation is taken with respect to the escort function, this is called escort moment matching.
As an example, let us consider the situation where the prior is the Student-t distribution and the posterior is approximated by the Student-t distribution: \(p(w|D) \approx \tilde p(w) = \mathrm{St}(w; \tilde\mu, \tilde\Sigma, v)\). Then the approximated posterior \(\tilde p_i(w) = \mathrm{St}(w; \tilde\mu^{(i)}, \tilde\Sigma^{(i)}, v)\) can be obtained by minimizing the t-divergence from \(p_i(w) \propto \tilde p_{i-1}(w)\, l_i(w)\) as
\[ \arg\min_{\mu', \Sigma'}\, D_t\big(p_i(w) \,\|\, \mathrm{St}(w; \mu', \Sigma', v)\big). \tag{18} \]
This allows us to obtain an analytical update expression for t-exponential family distributions.
4 Expectation Propagation for t-exponential Family
As shown in the previous section, ADF has been extended to EP (which resulted in moment matching
for the exponential family) and to the t-exponential family (which yielded escort moment matching
for the t-exponential family). In this section, we combine these two extensions and propose EP for
the t-exponential family.
4.1 Pseudo Additivity and q-Algebra
Differently from ordinary exponential functions, deformed exponential functions do not satisfy the product rule:
\[ \exp_t(x)\, \exp_t(y) \ne \exp_t(x + y). \tag{19} \]
For this reason, the cavity distribution cannot be computed analytically for the t-exponential family. On the other hand, the following equality holds for deformed exponential functions:
\[ \exp_t(x)\, \exp_t(y) = \exp_t\big(x + y + (1-t)\,xy\big), \tag{20} \]
which is called pseudo additivity.
In statistical physics [7, 11], a special algebra called q-algebra has been developed to handle systems with pseudo additivity. We will use the q-algebra for efficiently handling t-exponential distributions.
Definition 1 (q-product) The operation \(\otimes_q\), called the q-product, is defined as
\[ x \otimes_q y := \begin{cases} \big[x^{1-q} + y^{1-q} - 1\big]^{\frac{1}{1-q}} & \text{if } x > 0,\ y > 0,\ x^{1-q} + y^{1-q} - 1 > 0, \\ 0 & \text{otherwise}. \end{cases} \tag{21} \]
Definition 2 (q-division) The operation \(\oslash_q\), called the q-division, is defined as
\[ x \oslash_q y := \begin{cases} \big[x^{1-q} - y^{1-q} + 1\big]^{\frac{1}{1-q}} & \text{if } x > 0,\ y > 0,\ x^{1-q} - y^{1-q} + 1 > 0, \\ 0 & \text{otherwise}. \end{cases} \tag{22} \]
Definition 3 (q-logarithm) The q-logarithm is defined as
\[ \ln_q x := \frac{x^{1-q} - 1}{1-q} \qquad (x \ge 0,\ q \in \mathbb{R}^+). \tag{23} \]
The q-division is the inverse of the q-product (and vice versa), and the q-logarithm is the inverse of the q-exponential (and vice versa). From the above definitions, the q-logarithm and q-exponential satisfy the following relations:
\[ \ln_q(x \otimes_q y) = \ln_q x + \ln_q y, \tag{24} \]
\[ \exp_q(x) \otimes_q \exp_q(y) = \exp_q(x + y), \tag{25} \]
which are called the q-product rules. Also for the q-division, similar properties hold:
\[ \ln_q(x \oslash_q y) = \ln_q x - \ln_q y, \tag{26} \]
\[ \exp_q(x) \oslash_q \exp_q(y) = \exp_q(x - y), \tag{27} \]
which are called the q-division rules.
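A small sketch of these operations (an illustration of ours); the final line numerically checks the q-product rule (25):

```python
import numpy as np

def exp_q(x, q):
    return max(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def q_product(x, y, q):
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return base ** (1.0 / (1.0 - q)) if x > 0 and y > 0 and base > 0 else 0.0

def q_division(x, y, q):
    base = x ** (1.0 - q) - y ** (1.0 - q) + 1.0
    return base ** (1.0 / (1.0 - q)) if x > 0 and y > 0 and base > 0 else 0.0

q = 1.5
assert np.isclose(q_product(exp_q(0.2, q), exp_q(0.3, q), q), exp_q(0.5, q))
```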
4.2 EP for t-exponential Family
The q-algebra allows us to recover many useful properties from the ordinary exponential family. For example, the q-product of t-exponential family distributions yields an unnormalized t-exponential distribution:
\[ \exp_t\big(\langle\Phi(x), \theta_1\rangle - g_t(\theta_1)\big) \otimes_t \exp_t\big(\langle\Phi(x), \theta_2\rangle - g_t(\theta_2)\big) = \exp_t\big(\langle\Phi(x), \theta_1 + \theta_2\rangle - \tilde g_t(\theta_1, \theta_2)\big). \tag{28} \]
Based on this q-product rule, we develop EP for the t-exponential family.
Consider the situation where the prior distribution \(p^{(0)}(w)\) is a member of the t-exponential family. As an approximation to the posterior, we choose a t-exponential family distribution
\[ \tilde p(w; \theta) = \exp_t\big(\langle\Phi(w), \theta\rangle - g_t(\theta)\big). \tag{29} \]
In the original EP for the ordinary exponential family, we considered an approximate posterior of the form
\[ \tilde p(w) \propto p^{(0)}(w) \prod_i \tilde l_i(w), \tag{30} \]
that is, we factorized the posterior into a product of site approximations corresponding to the data. On the other hand, in the case of the t-exponential family, we propose to use the following form, called the t-factorization:
\[ \tilde p(w) \propto p^{(0)}(w) \otimes_t \bigotimes^{(t)}_{i} \tilde l_i(w), \tag{31} \]
where \(\bigotimes^{(t)}\) denotes the product over sites taken with the q-product \(\otimes_t\).
The t-factorization reduces to the original factorization form when t = 1.
This t-factorization enables us to calculate EP update rules through natural parameters for the t-exponential family in the same way as for the ordinary exponential family. More specifically, consider the case where factor j of the t-factorization is updated in four steps, in the same way as in original EP.
(I) First, we calculate the cavity distribution by using the q-division as
\[ \tilde p^{\setminus j}(w) \propto \tilde p(w) \oslash_t \tilde l_j(w) \propto p^{(0)}(w) \otimes_t \bigotimes^{(t)}_{i \ne j} \tilde l_i(w). \tag{32} \]
The above calculation is reduced to subtraction of natural parameters by using the q-algebra rules:
\[ \theta^{\setminus j} = \theta - \theta^{(j)}. \tag{33} \]
(II) The second step is the inclusion of the site likelihood \(l_j(w)\), which can be performed via \(\tilde p^{\setminus j}(w)\, l_j(w)\). The site likelihood \(l_j(w)\) is incorporated to approximate the posterior by the ordinary product, not the q-product. Thus moment matching is performed to obtain a new approximation. For this purpose, the following theorem is useful.
Theorem 1 The expected sufficient statistic,
\[ \eta = \nabla_\theta\, g_t(\theta) = \mathbb{E}_{\tilde p^{\mathrm{es}}}[\Phi(w)], \tag{34} \]
can be derived as
\[ \eta = \eta^{\setminus j} + \frac{1}{Z_2}\, \nabla_{\theta^{\setminus j}} Z_1, \tag{35} \]
\[ \text{where}\quad Z_1 = \int \tilde p^{\setminus j}(w)\, \big(l_j(w)\big)^t\, dw, \qquad Z_2 = \int \tilde p^{\setminus j,\mathrm{es}}(w)\, \big(l_j(w)\big)^t\, dw. \tag{36} \]
A proof of Theorem 1 is given in Appendix A of the supplementary material. After moment matching, we obtain an approximation \(\tilde p^{\mathrm{new}}(w)\).
(III) Third, we exclude the effect of sites other than \(j\). This is achieved by
\[ \tilde l_j^{\mathrm{new}}(w) \propto \tilde p^{\mathrm{new}}(w) \oslash_t \tilde p^{\setminus j}(w), \tag{37} \]
which is reduced to subtraction of natural parameters:
\[ \theta^{(j)}_{\mathrm{new}} = \theta^{\mathrm{new}} - \theta^{\setminus j}. \tag{38} \]
(IV) Finally, we update the site approximation by replacing \(\tilde l_j(w)\) with \(\tilde l_j(w)^{\mathrm{new}}\).
These four steps constitute our proposed EP method for the t-exponential family. As we have seen, these steps reduce to the ordinary EP steps if t = 1. Thus, the proposed method can be regarded as an extension of the original EP to the t-exponential family.
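Since steps (I) and (IV) reduce to subtraction and addition of natural parameters, the overall loop has the same shape as ordinary EP. The following is a structural sketch only (our own pseudocode-style Python, not from the paper): `moment_match` stands for the escort moment matching of Theorem 1 and is left abstract, and natural parameters are represented as plain vectors.

```python
def ep_t_exponential(theta_prior, sites, likelihoods, moment_match, n_sweeps=10):
    """EP for the t-exponential family; sites[j] is the natural parameter
    of the j-th site approximation (all entries NumPy arrays)."""
    theta = theta_prior + sum(sites)                  # total approximation
    for _ in range(n_sweeps):
        for j, l_j in enumerate(likelihoods):
            theta_cav = theta - sites[j]              # (I)   cavity, Eq. (33)
            theta_new = moment_match(theta_cav, l_j)  # (II)  escort moment matching
            sites[j] = theta_new - theta_cav          # (III)-(IV), Eq. (38)
            theta = theta_new
    return theta, sites
```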
4.3 Marginal Likelihood for t-exponential Family
In the above, we omitted the normalization term of the site approximation to simplify the derivation. Here, we derive the marginal likelihood, which requires us to explicitly take into account the normalization term \(\tilde C_i\):
\[ \tilde l_i(w \,|\, \tilde C_i, \tilde\theta_i) = \tilde C_i \otimes_t \exp_t\big(\langle\Phi(w), \tilde\theta_i\rangle\big). \tag{39} \]
We assume that this normalizer corresponds to \(Z_1\), which is the same assumption as that for ordinary EP. To calculate \(Z_1\), we use the following theorem (its proof is available in Appendix B of the supplementary material):
Theorem 2 For the Student-t distribution, we have
\[ \int \exp_t\big(\langle\Phi(w), \theta\rangle - g\big)\, dw = \exp_t\big(g_t(\theta)/\Psi - g/\Psi\big)^{\frac{3-t}{2}}, \tag{40} \]
where \(g\) is a constant, \(g_t(\theta)\) is the log-partition function, and \(\Psi\) is defined in (7).
Figure 1: Boundaries obtained by ADF (left two, with different sample orders) and EP (right).
This theorem yields
\[ \log_t Z_1^{\frac{2}{3-t}} = g_t(\theta)/\Psi - g_t(\theta^{\setminus j})/\Psi + \log_t \tilde C_j/\Psi, \tag{41} \]
and therefore the marginal likelihood can be calculated as follows (see Appendix C for details):
\[ Z_{\mathrm{EP}} = \int p^{(0)}(w) \otimes_t \bigotimes^{(t)}_{i} \tilde l_i(w)\, dw = \exp_t\Big(\sum_i \log_t \tilde C_i/\Psi + g_t(\theta)/\Psi - g_t^{\mathrm{prior}}(\theta)/\Psi\Big)^{\frac{3-t}{2}}. \tag{42} \]
By substituting \(\tilde C_i\) into Eq. (42), we obtain the marginal likelihood. Note that, if t = 1, the above expression for \(Z_{\mathrm{EP}}\) reduces to the ordinary marginal likelihood expression [9]. Therefore, this marginal likelihood can be regarded as a generalization of the ordinary exponential family marginal likelihood to the t-exponential family.
In Appendices D and E of the supplementary material, we derive specific EP algorithms for the
Bayes point machine (BPM) and Student-t process classification.
5 Numerical Experiments
In this section, we numerically illustrate the behavior of our proposed EP applied to the BPM and Student-t process classification. Suppose that data \((x_1, y_1), \ldots, (x_n, y_n)\) are given, where \(y_i \in \{+1, -1\}\) expresses a class label for covariate \(x_i\). We consider a model whose likelihood term can be expressed as
\[ l_i(w) = p(y_i | x_i, w) = \epsilon + (1 - 2\epsilon)\,\Theta(y_i \langle w, x_i\rangle), \tag{43} \]
where \(\Theta(x)\) is the step function taking 1 if \(x > 0\) and 0 otherwise.
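A direct transcription of this likelihood (the default value of ε below is a placeholder, not a value from the paper):

```python
import numpy as np

def likelihood(y, x, w, eps=0.1):
    """l_i(w) of Eq. (43): a step function contaminated by label noise eps."""
    return eps + (1.0 - 2.0 * eps) * float(y * np.dot(w, x) > 0)
```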
5.1 BPM
We compare EP and ADF to confirm that EP does not depend on data permutation. We generate a toy dataset in the following way: 1000 data points \(x\) are generated from the Gaussian mixture model \(0.05\,\mathcal N(x; [1, 1]^\top, 0.05I) + 0.25\,\mathcal N(x; [-1, 1]^\top, 0.05I) + 0.45\,\mathcal N(x; [-1, -1]^\top, 0.05I) + 0.25\,\mathcal N(x; [1, -1]^\top, 0.05I)\), where \(\mathcal N(x; \mu, \Sigma)\) denotes the Gaussian density with respect to \(x\) with mean \(\mu\) and covariance matrix \(\Sigma\), and \(I\) is the identity matrix. For \(x\), we assign label \(y = +1\) when \(x\) comes from \(\mathcal N(x; [1, 1]^\top, 0.05I)\) or \(\mathcal N(x; [1, -1]^\top, 0.05I)\), and label \(y = -1\) when \(x\) comes from \(\mathcal N(x; [-1, 1]^\top, 0.05I)\) or \(\mathcal N(x; [-1, -1]^\top, 0.05I)\). We evaluate the dependence of the performance of the BPM (see Appendix D of the supplementary material for details) on data permutation.
Fig. 1 shows the labeled samples as blue and red points, and the decision boundaries as black lines, derived from ADF and EP for the Student-t distribution with v = 10 under different data permutations. The first two panels show an obvious dependence on data permutation for ADF (to highlight this dependence, we show the two most different boundaries in the figure), while the last panel exhibits almost no dependence on data permutation for EP.
Figure 2: Classification boundaries.
5.2 Student-t Process Classification
We compare the robustness of Student-t process classification (STC) and Gaussian process classification (GPC) visually.
We apply our EP method to Student-t process binary classification, where the latent function follows the Student-t process (see Appendix E of the supplementary material for details). We compare this model with Gaussian process binary classification with the likelihood expressed in Eq. (43). This kind of model is called robust Gaussian process classification [5]. Since the posterior distribution cannot be obtained analytically even for the Gaussian process, we use EP for the ordinary exponential family to approximate the posterior.
We use a two-dimensional toy dataset, where we generate two-dimensional data points \(x_i\) (\(i = 1, \ldots, 300\)) following the normal distributions \(p(x|y_i = +1) = \mathcal N(x; [1.5, 1.5]^\top, 0.5I)\) and \(p(x|y_i = -1) = \mathcal N(x; [-1, -1]^\top, 0.5I)\). We add eight outliers to the dataset and evaluate the robustness against outliers (about 3% outliers). In the experiment, we used v = 10 for the Student-t processes. We furthermore used the following kernel:
\[ k(x_i, x_j) = \theta_0 \exp\Big\{-\sum_{d=1}^{D} \theta_1^d\, (x_i^d - x_j^d)^2\Big\} + \theta_2 + \theta_3\, \delta_{i,j}, \tag{44} \]
where \(x_i^d\) is the d-th element of \(x_i\), and \(\theta_0, \theta_1, \theta_2, \theta_3\) are hyperparameters to be optimized.
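For concreteness, a sketch of this kernel (an illustration of ours; `theta1` is taken to be a length-D vector of per-dimension weights):

```python
import numpy as np

def kernel(x_i, x_j, theta0, theta1, theta2, theta3, same_index=False):
    """Kernel of Eq. (44); the delta term fires only when i == j."""
    sq = np.sum(theta1 * (x_i - x_j) ** 2)
    return theta0 * np.exp(-sq) + theta2 + theta3 * float(same_index)
```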
Fig. 2 shows the labeled samples as blue and red points, the obtained decision boundaries as black lines, and the added outliers as blue and red stars. As we can see, the decision boundaries obtained by the Gaussian process classifier are heavily affected by the outliers, while those obtained by the Student-t process classifier are more stable.
Table 1: Classification error rates (%).

    Dataset      Outliers   GPC          STC
    Pima         0          34.0 (3.0)   32.3 (2.6)
                 5%         34.9 (3.1)   32.9 (3.1)
                 10%        36.2 (3.3)   34.4 (3.5)
    Ionosphere   0          9.6 (1.7)    7.5 (2.0)
                 5%         9.9 (2.8)    9.6 (3.2)
                 10%        13.0 (5.2)   11.9 (5.4)
    Thyroid      0          4.3 (1.3)    4.4 (1.3)
                 5%         4.8 (1.8)    5.5 (2.3)
                 10%        5.4 (1.4)    7.2 (3.4)
    Sonar        0          15.4 (3.6)   15.0 (3.2)
                 5%         18.3 (4.4)   17.5 (3.3)
                 10%        19.4 (3.8)   19.4 (3.1)

Table 2: Approximate log evidence.

    Dataset      Outliers   GPC           STC
    Pima         0          -74.1 (2.4)   -37.1 (6.1)
                 5%         -77.8 (2.9)   -37.2 (6.5)
                 10%        -78.6 (1.8)   -36.8 (6.5)
    Ionosphere   0          -59.5 (5.2)   -36.9 (7.4)
                 5%         -75.0 (3.6)   -35.8 (7.0)
                 10%        -90.3 (5.2)   -37.4 (7.2)
    Thyroid      0          -32.5 (1.6)   -41.2 (4.3)
                 5%         -39.1 (2.3)   -45.8 (5.5)
                 10%        -46.9 (1.8)   -45.8 (4.5)
    Sonar        0          -55.8 (1.2)   -41.6 (1.2)
                 5%         -59.4 (2.5)   -41.3 (1.6)
                 10%        -65.8 (1.1)   -67.8 (2.1)
Thus, as expected, Student-t process classification is more robust against outliers than Gaussian process classification, thanks to the heavy-tailed structure of the Student-t distribution.
5.3 Experiments on the Benchmark dataset
We compared the performance of Gaussian process and Student-t process classification on the UCI datasets.¹ We used the kernel given in Eq. (44). A detailed explanation of the experimental settings is given in Appendix F. Results are shown in Tables 1 and 2, where the outlier percentage indicates the fraction of training labels that we randomly flipped to create additional outliers. As we can see, Student-t process classification outperforms Gaussian process classification in many cases.
6 Conclusions
In this work, we enabled the t-exponential family to inherit the important property of the exponential family that calculation can be performed efficiently through natural parameters, by using the q-algebra. With this natural-parameter-based calculation, we developed EP for the t-exponential family by introducing the t-factorization approach. The key concept of our proposed approach is that the t-exponential family has pseudo additivity. When t = 1, our proposed EP for the t-exponential family reduces to the original EP for the ordinary exponential family, and the t-factorization yields the ordinary data-dependent factorization. Therefore, our proposed EP method can be viewed as a generalization of the original EP. Through illustrative experiments, we confirmed that our proposed EP applied to the Bayes point machine can overcome the drawback of ADF, i.e., the proposed EP method is independent of data permutations. We also experimentally illustrated that the proposed EP applied to Student-t process classification exhibits high robustness to outliers compared with Gaussian process classification. Experiments on benchmark data also demonstrated the superiority of the Student-t process.
In future work, we will further extend the proposed EP method to more general message passing methods or double-loop EP. We would also like to make our method more scalable to large datasets and to develop other approximation methods such as variational inference.
Acknowledgement
FF acknowledges support by JST CREST JPMJCR1403 and MS acknowledges support by KAKENHI
17H00757.
¹ https://archive.ics.uci.edu/ml/index.php
References
[1] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] Nan Ding, Yuan Qi, and S. V. N. Vishwanathan. t-divergence based approximate inference. In Advances in Neural Information Processing Systems, pages 1494-1502, 2011.
[3] Nan Ding and S. V. N. Vishwanathan. t-logistic regression. In Advances in Neural Information Processing Systems, pages 514-522, 2010.
[4] Pasi Jylänki, Jarno Vanhatalo, and Aki Vehtari. Robust Gaussian process regression with a Student-t likelihood. Journal of Machine Learning Research, 12(Nov):3227-3257, 2011.
[5] Hyun-Chul Kim and Zoubin Ghahramani. Outlier robust Gaussian process classification. Structural, Syntactic, and Statistical Pattern Recognition, pages 896-905, 2008.
[6] Thomas Peter Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[7] Laurent Nivanen, Alain Le Mehaute, and Qiuping A. Wang. Generalized algebra within a nonextensive statistics. Reports on Mathematical Physics, 52(3):437-444, 2003.
[8] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning, volume 1. MIT Press, Cambridge, 2006.
[9] Matthias Seeger. Expectation propagation for exponential families. Technical report, 2005. URL https://infoscience.epfl.ch/record/161464/files/epexpfam.pdf
[10] Amar Shah, Andrew Wilson, and Zoubin Ghahramani. Student-t processes as alternatives to Gaussian processes. In Artificial Intelligence and Statistics, pages 877-885, 2014.
[11] Hiroki Suyari and Makoto Tsukada. Law of error in Tsallis statistics. IEEE Transactions on Information Theory, 51(2):753-757, 2005.
6,434 | 682 | Synaptic Weight Noise During MLP
Learning Enhances Fault-Tolerance,
Generalisation and Learning Trajectory
Alan F. Murray
Dept. of Electrical Engineering
Edinburgh University
Scotland
Peter J. Edwards
Dept. of Electrical Engjneering
Edinburgh University
Scotland
Abstract
We analyse the effects of analog noise on the synaptic arithmetic
during MultiLayer Perceptron training, by expanding the cost function to include noise-mediated penalty terms. Predictions are made
in the light of these calculations which suggest that fault tolerance,
generalisation ability and learning trajectory should be improved
by such noise-injection. Extensive simulation experiments on two
distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where
weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate"
analog neural VLSI.
1
Introduction
This paper demonstrates both by consjderatioll of the cost function and the learning equations, and by simulation experiments, that injection of random noise on
to MLP weights during learning enhances fault-tolerance without additional supervision. We also show that the nature of the hidden node states and the learning
trajectory is altered fundamentally, in a manner that improves training times and
learning quality. The enhancement uses the mediating influence of noise to distribute information optimally across the existing weights.
491
492
Murray and Edwards
Taylor [Taylor , 72] has studied noisy synapses, largely in a biological context, and
infers that the noise might assist learning. vVe have already demonstrated that noise
injection both reduces the learning time and improves the network's generalisation
ability [Murray, 91],[Murray, 92]. It is established[Matsuoka, 92],[Bishop, 90] that
adding noise to the training data in neural (MLP) learning improves the "quality"
of learning, as measured by the trained network's ability to generalise. Here we
infer (synaptic) noise-mediated terms that sculpt the error function to favour faster
learning, and that generate more robust internal representations, giving rise to
better generalisation and immunity to smaIl variations in the characteristics of the
test data. Much closer to the spirit of this paper is the work of Hanson[Hanson, 90].
His stochastic version of the delta rule effectively adapts weight means and standard
deviations. Also Sequin and Clay [Sequin , 91] use stuck-at faults during training
which imbues the trained network with an ability to withstand such faults. They
also note, but do not pursue, an increased generalisation ability.
This paper presents an outline of the mathematical predictions and verification
simulations. A full description of the work is given in [Murray, 93].
2 Mathematics
Let us analyse an MLP with I input, J hidden and K output nodes, with a set of P training input vectors \(O_p = \{o_{ip}\}\), looking at the effect of noise injection on the error function itself. We are thus able to infer, from the additional terms introduced by noise, the characteristics of solutions that tend to reduce the error, and those which tend to increase it. The former will clearly be favoured, or at least stabilised, by the additional terms, while the latter will be de-stabilised.
Let each weight \(T_{ab}\) be augmented by a random noise source, such that \(T_{ab} \rightarrow T_{ab} + \Delta_{ab}\, T_{ab}\), for all weights \(\{T_{ab}\}\). Neuron thresholds are treated in precisely the same way. Note in passing, but importantly, that this synaptic noise is not the same as noise on the input data. Input noise is correlated across the synapses leaving an input node, while the synaptic noise that forms the basis of this study is not. The effect is thus quite distinct.
Considering, therefore, an error function of the form
\[ \epsilon_{tot,p} = \frac{1}{2} \sum_{k=0}^{K-1} \epsilon_{kp}^2 = \frac{1}{2} \sum_{k=0}^{K-1} \big(o_{kp}(\{T_{ab}\}) - \hat o_{kp}\big)^2, \tag{1} \]
where \(\hat o_{kp}\) is the target output. We can now perform a Taylor expansion of the output \(o_{kp}\) to second order, around the noise-free weight set \(\{T^N\}\), and thus augment the error function:
\[ o_{kp} \rightarrow o_{kp} + \sum_{ab} T_{ab}\Delta_{ab} \left(\frac{\partial o_{kp}}{\partial T_{ab}}\right) + \frac{1}{2} \sum_{ab,\,cd} T_{ab}\Delta_{ab}\, T_{cd}\Delta_{cd} \left(\frac{\partial^2 o_{kp}}{\partial T_{ab}\, \partial T_{cd}}\right) + O(\Delta^3). \tag{2} \]
cd
If we ignore terms of order ~ 3 and above, and taking the time average over the
learning phase, we can infer that two terms are added to the error function ;-
< ftot >=< (tot( {TN}) > +2~
t"I:l ~2
p=l
k=O
LTab 2
ab
[(~;kP)
ab
2
+ (kp
(:~k~)l
ab
(3)
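The time average on the left can be estimated directly by Monte Carlo, which gives a simple numerical check of this expansion. A sketch (our own; `forward` is a placeholder for the network's forward pass, which the paper does not specify in code):

```python
import numpy as np

def mean_noisy_error(weights, inputs, targets, delta_max, forward,
                     n_draws=1000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of <e_tot> under multiplicative weight noise
    T -> T + Delta*T, with Delta ~ Uniform(-delta_max, delta_max)."""
    total = 0.0
    for _ in range(n_draws):
        noisy = [w * (1.0 + rng.uniform(-delta_max, delta_max, size=w.shape))
                 for w in weights]
        out = forward(noisy, inputs)
        total += 0.5 * np.sum((out - targets) ** 2)
    return total / n_draws
```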
Consider also the perceptron rule update on the hidden-output layer along with the expanded error function:
\[ \langle \delta T_{kj} \rangle = -\eta \sum_{p} \big\langle \epsilon_{kp}\, o_{jp}\, o'_{kp} \big\rangle \;-\; \eta\,\frac{\overline{\Delta^2}}{2} \sum_{p} \big\langle o_{jp}\, o'_{kp} \big\rangle \sum_{ab} T_{ab}^2\, \frac{\partial^2 \epsilon_{kp}^2}{\partial T_{ab}^2}, \tag{4} \]
averaged over several training epochs (which is acceptable for small values of \(\eta\), the adaptation rate parameter).
3 Simulations
The simulations detailed below are based on the virtual targets algorithm
[Murray, 92], a variant on backpropagation, with broadly similar performance. The
"targets" algorithm was chosen for its faster convergence properties. Two contrasting classification tasks were selected to verify the predictions made in the following
section by simulation. The first, a feature location task, uses real world normalised
greyscale image data. The task was to locate eyes in facial images - to classify
sections of these as either "eye" or "not-eye". The network was trained on 16 x 16
preclassified sections of the images, classified as eyes and not-eyes. The not-eyes
were random sections of facial images, avoiding the eyes (see Fig. 1). The second,
[Figure: a 16×16 image section is fed to the network and classified as "eye" or "not-eye".]
Figure 1: The eye/not-eye classifier.
a more artificial task, was the ubiquitous character encoder (Fig. 2), where a 25-dimensional binary input vector describing the 26 alphabetic characters (each 5 × 5 pixels) was used to train the network with a one-out-of-26 output code.
[Figure: a 25-bit input vector is mapped to 26 output units.]
Figure 2: The character encoder task.
During the simulations, noise was added to the weights at a level proportional to the weight size, drawn from a uniform density (i.e. \(-\Delta_{max} < \Delta < \Delta_{max}\)). Levels of up to 40% were probed in detail, although it is clear that the expansion above is not quantitatively valid at this level. Above these percentages further improvements were seen in the network performance, although the dynamics of the training algorithm became chaotic. The injected noise level was reduced
smoothly to a minimum value of 1% as the network approached convergence (as evidenced by the highest output bit error). As in all neural network simulations, the results depended upon the training parameters, network sizes and the random start position of the network. To overcome these factors and to achieve a meaningful result, 35 weight sets were produced for each noise level. All other characteristics of the training process were held constant. The results are therefore not simply pathological freaks.
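A minimal sketch of this injection scheme (our own illustration; the annealing rule shown is an assumption, since the text only states that the level was reduced smoothly with the highest output bit error):

```python
import numpy as np

def inject_weight_noise(weights, level, rng):
    """T -> T + Delta*T with Delta ~ Uniform(-level, level), per weight."""
    delta = rng.uniform(-level, level, size=weights.shape)
    return weights * (1.0 + delta)

def annealed_level(max_level, worst_bit_error, min_level=0.01):
    # Hypothetical schedule: proportional to the worst output bit error
    return max(min_level, max_level * worst_bit_error)
```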
4 Prediction/Verification
4.1 Fault Tolerance
Consider the first-derivative penalty term in the expanded cost function (3), averaged over all patterns, output nodes and weights:
\[ K\, \overline{\Delta^2}\, \Big\langle T_{ab}^2 \Big(\frac{\partial o_{kp}}{\partial T_{ab}}\Big)^2 \Big\rangle. \tag{5} \]
The implications of this term are straightforward. For large values of the (weighted) average magnitude of the derivative, the overall error is increased. This term
therefore causes solutions to be favoured where the dependence of outputs on individual weights is evenly distributed across the entire weight set. Furthermore,
weight saliency should not only have a lower average value, but a smaller scatter
across the weight set as the training process attempts to reconcile the competing
pressures to reduce both (1) and (5). This more distributed representation should
be manifest in an improved tolerance to faulty weights.
[Figure: patterns correctly classified (%) versus synapses removed (%), for training noise levels 0%, 10%, 20%, 30% and 40%.]
Figure 3: Fault tolerance in the character encoder problem.
Simulations were carried out on 35 weight sets produced for each of the two problems at each of 5 levels of noise injected during training. Weights were then randomly removed and the networks tested on the training data. The resulting graphs (Figs. 3 and 4) show graceful degradation, with an increased tolerance to faults when noise was injected during training. The networks were highly constrained for these simulations to remove some of the natural redundancy of the MLP structure.
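The fault-tolerance test amounts to zeroing a random fraction of the synapses and re-measuring the classification rate; a sketch (our own):

```python
import numpy as np

def remove_synapses(weights, fraction, rng):
    """Zero a randomly chosen fraction of the entries of a weight matrix."""
    mask = rng.random(weights.shape) >= fraction
    return weights * mask
```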
[Figure: patterns correctly classified (%) versus synapses removed (%) for the eye/not-eye task, for training noise levels 0%, 10%, 20%, 30% and 40%.]
Figure 4: Fault tolerance enhancement in the eye/not-eye classifier.
Although the eye/not-eye problem contains a high proportion of redundant information, the improvement in the network's ability to withstand damage with injected noise is clear.
4.2 Generalisation Ability
Consider now the derivative in equation (5) for the input-hidden weights. The term that is added to the error function, again averaged over all patterns, output nodes and weights, is:
\[ K\, \overline{\Delta^2}\, \Big\langle T_{ji}^2\, o_{ip}^2\, (o'_{jp})^2\, T_{kj}^2\, (o'_{kp})^2 \Big\rangle. \tag{6} \]
If an output neuron has a non-zero connection from a particular hidden node (\(T_{kj} \ne 0\)), and provided the input \(o_{ip}\) is non-zero and is connected to the hidden node (\(T_{ji} \ne 0\)), there is also a term \(o'_{jp}\) that will tend to favour solutions with the hidden nodes turned firmly ON or OFF (i.e. \(o_{jp} = 0\) or 1). Remembering, of course, that all these terms are noise-mediated, and that during the early stages of training the "actual" error \(\epsilon_{kp}\) in (1) will dominate, this term will de-stabilise final solutions that balance the hidden nodes on the slope of the sigmoid. Naturally, hidden nodes \(o_j\) that are firmly ON or OFF are less likely to change state as a result of small variations in the input data \(\{o_i\}\). This should become evident in an increased tolerance to input perturbations and therefore an increased generalisation ability.
Simulations were again carried out on the two problems using 35 weight sets for
each level of injected synaptic noise during training. For the character encoder
problem, generalisation is not really an issue, but it is possible to verify the above prediction by introducing random Gaussian noise into the input data and noting the degradation in performance. The results of these simulations are shown in Fig. 5, and clearly show an increased ability to withstand input perturbation when noise was injected into the synapses during training.
Generalisation ability for the eye/not-eye problem is a real issue. This problem therefore gives a valid test of whether the synaptic noise technique actually improves generalisation performance. The networks were therefore tested on previously unseen facial images; the results are shown in Table 1.
[Figure: classification performance (%) versus training noise level (%), for input perturbation levels 0.05, 0.10, 0.15 and 0.20.]
Figure 5: Generalisation enhancement shown through increased tolerance to input perturbation, in the character encoder problem.
Noise Level                              0%       10%      20%      30%      40%
Test Patterns Correctly Classified (%)   67.875   70.406   70.416   72.454   75.446

Table 1: Generalisation enhancement shown through increased ability to classify previously unseen data, in the eye/not-eye task.
These results show dramatically improved generalisation ability with increased levels of injected synaptic noise during training. An improvement of approximately 8% is seen, consistent with earlier results on a different "real" problem [Murray, 91].
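The input-perturbation probe used above for the character encoder is equally easy to mimic. A minimal sketch of ours (the "network" is a stand-in for a trained MLP) that adds zero-mean Gaussian noise of growing amplitude to the inputs and records the degradation:

import numpy as np

def degradation_curve(x, targets, predict, noise_levels, trials=10, seed=0):
    """Classification performance as additive Gaussian input noise grows.
    `predict` maps a batch of inputs to binary outputs in {0, 1}."""
    rng = np.random.default_rng(seed)
    curve = []
    for sigma in noise_levels:
        acc = np.mean([
            np.mean(predict(x + sigma * rng.standard_normal(x.shape)) == targets)
            for _ in range(trials)
        ])
        curve.append((sigma, float(acc)))
    return curve

# Toy usage: a fixed random linear "network" standing in for a trained MLP.
rng = np.random.default_rng(2)
w = rng.standard_normal((16, 4))
x = rng.standard_normal((100, 16))
targets = (x @ w > 0).astype(int)                # labels the clean model gets right
predict = lambda inp: (inp @ w > 0).astype(int)
print(degradation_curve(x, targets, predict, [0.0, 0.05, 0.1, 0.2]))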
4.3 Learning Trajectory
Consider now the second derivative penalty term in the expanded cost function (2). This term is complex as it involves second order derivatives, and also depends upon the sign and magnitude of the errors themselves {εlp}. The simplest way of looking at its effect is to look at a single exemplar term:
$$\frac{\Delta^2}{2}\,\epsilon_{lp}\,T_{ab}^2\,\frac{\partial^2 o_{lp}}{\partial T_{ab}^2} \qquad (7)$$
This term implies that when the combination of $\epsilon_{lp}\,\partial^2 o_{lp}/\partial T_{ab}^2$ is negative then the overall cost function error is reduced and vice versa. The term (7) is therefore constructive as it can actually lower the error locally via noise injection, whereas (6) always increases it. (7) can therefore be viewed as a sculpting of the error surface during the early phases of training (i.e. when εlp is substantial). In particular, a weight set with a higher "raw" error value, calculated from (1), may be favoured over one with a lower value if noise-injected terms indicate that the "poorer" solution is located in a promising area of weight space. This "look-ahead" property should lead to an enhanced learning trajectory, perhaps finding a solution more rapidly.
In the augmented weight update equation (4), the noise is acting as a medium projecting statistical information about the character of the entire weight set on to the update equation for each particular weight. So, the effect of the noise term is to account not only for the weight currently being updated, but to add in a term that estimates what the other weight changes are likely to do to the output, and adjust the size of the weight increment/decrement as appropriate.
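The mechanics of the underlying training procedure are easy to sketch. The snippet below (our illustration; the multiplicative form of the noise and all sizes are assumptions for the toy task) draws fresh Gaussian weight noise for every pattern presentation, so backpropagation implicitly computes the noise-augmented update discussed above:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_weight_noise(x, t, n_hidden=8, noise=0.3, lr=0.5,
                            epochs=500, seed=0):
    """Backprop on a 1-hidden-layer sigmoid MLP; each pattern presentation
    sees weights corrupted as T * (1 + noise * N(0, 1))."""
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((x.shape[1], n_hidden)) * 0.5
    w2 = rng.standard_normal((n_hidden, t.shape[1])) * 0.5
    for _ in range(epochs):
        for xi, ti in zip(x, t):
            # Fresh multiplicative noise per pattern, as in analog hardware.
            n1 = w1 * (1 + noise * rng.standard_normal(w1.shape))
            n2 = w2 * (1 + noise * rng.standard_normal(w2.shape))
            h = sigmoid(xi @ n1)
            o = sigmoid(h @ n2)
            # Gradients are taken through the noisy weights.
            d_o = (o - ti) * o * (1 - o)
            d_h = (d_o @ n2.T) * h * (1 - h)
            w2 -= lr * np.outer(h, d_o)
            w1 -= lr * np.outer(xi, d_h)
    return w1, w2

# Toy usage: 4-bit parity as a stand-in for the character encoder task.
rng = np.random.default_rng(3)
x = rng.integers(0, 2, (32, 4)).astype(float)
t = (x.sum(axis=1, keepdims=True) % 2)
w1, w2 = train_with_weight_noise(x, t, noise=0.2)
pred = sigmoid(sigmoid(x @ w1) @ w2) > 0.5
print("train accuracy:", float(np.mean(pred == t)))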
To verify this by simulation is not as straightforward as the other predictions. It is however possible to show the mean training time for each level of injected noise. For each noise level, 1000 random start points were used to allow the underlying properties of the training process to emerge. The results are shown in Fig. 6.
[Plot omitted: mean learning epochs (roughly 300-600) vs. synaptic noise level used during training (0-50%).]
Figure 6: Training time as a function of injected synaptic noise during training.
They clearly show that at low noise levels (≤ 30% for the case of the character encoder) a definite reduction in training times is seen. At higher levels the chaotic nature of the "noisy learning" takes over.
It is also possible to plot the combination of $\epsilon_{kp}\,\partial^2 o_{kp}/\partial T_{ab}^2$. This is shown in Fig. 7, again for the character encoder problem.
[Plot omitted: the second derivative × error term over learning epochs 0-2000, for injected noise levels 0% and 7%.]
Figure 7: The second derivative × error term trajectory for injected synaptic noise levels 0% and 7%.
The term (7) is reduced more quickly with injected noise, thus effecting better weight changes via (4). At levels of noise > 7% the effect is exaggerated, and the noise-mediated improvements take place during the first 100-200 epochs of training. The level of 7% is displayed simply because it is visually clear what is happening, and is also typical.
5 Conclusion
We have shown both by mathematical expansion and by simulation that injecting
random noise on to the synaptic weights of a MultiLayer Perceptron during the
training phase enhances fault-tolerance, generalisation ability and learning trajectory. It has long been held that any inaccuracy during training is detrimental to
MLP learning. This paper proves that analog inaccuracy is not. The mathematical
predictions are perfectly general and the simulations relate to a non-trivial classification task and a "real" world problem. The results are therefore important for
the designers of analog hardware and also as a non-invasive technique for producing
learning enhancements in the software domain.
Acknowledgements
We are grateful to the Science and Engineering Research Council for financial support, and to Lionel Tarassenko and Chris Bishop for encouragement and advice.
References
[Taylor, 72]   J. G. Taylor, "Spontaneous Behaviour in Neural Networks", J. Theor. Biol., vol. 36, pp. 513-528, 1972.
[Murray, 91]   A. F. Murray, "Analog Noise-Enhanced Learning in Neural Network Circuits," Electronics Letters, vol. 27, no. 17, pp. 1546-1548, 1991.
[Murray, 92]   A. F. Murray, "Multi-Layer Perceptron Learning Optimised for On-Chip Implementation - a Noise Robust System," Neural Computation, vol. 4, no. 3, pp. 366-381, 1992.
[Matsuoka, 92] K. Matsuoka, "Noise Injection into Inputs in Back-Propagation Learning", IEEE Trans. Systems, Man and Cybernetics, vol. 22, no. 3, pp. 436-440, 1992.
[Bishop, 90]   C. Bishop, "Curvature-Driven Smoothing in Backpropagation Neural Networks," IJCNN, vol. 2, pp. 749-752, 1990.
[Hanson, 90]   S. J. Hanson, "A Stochastic Version of the Delta Rule", Physica D, vol. 42, pp. 265-272, 1990.
[Sequin, 91]   C. H. Sequin, R. D. Clay, "Fault Tolerance in Feed-Forward Artificial Neural Networks", Neural Networks: Concepts, Applications and Implementations, vol. 4, pp. 111-141, 1991.
[Murray, 93]   A. F. Murray, P. J. Edwards, "Enhanced MLP Performance and Fault Tolerance Resulting from Synaptic Weight Noise During Training", IEEE Trans. Neural Networks, 1993, In Press.
6,435 | 6,820 | Few-Shot Learning Through an Information
Retrieval Lens
Eleni Triantafillou
University of Toronto
Vector Institute
Richard Zemel
University of Toronto
Vector Institute
Raquel Urtasun
University of Toronto
Vector Institute
Uber ATG
Abstract
Few-shot learning refers to understanding new concepts from only a few examples.
We propose an information retrieval-inspired approach for this problem that is
motivated by the increased importance of maximally leveraging all the available
information in this low-data regime. We define a training objective that aims to
extract as much information as possible from each training batch by effectively
optimizing over all relative orderings of the batch points simultaneously. In particular, we view each batch point as a ?query? that ranks the remaining ones based
on its predicted relevance to them and we define a model within the framework
of structured prediction to optimize mean Average Precision over these rankings.
Our method achieves impressive results on the standard few-shot classification
benchmarks while is also capable of few-shot retrieval.
1 Introduction
Recently, the problem of learning new concepts from only a few labelled examples, referred to
as few-shot learning, has received considerable attention [1, 2]. More concretely, K-shot N-way
classification is the task of classifying a data point into one of N classes, when only K examples
of each class are available to inform this decision. This is a challenging setting that necessitates
different approaches from the ones commonly employed when the labelled data of each new concept
is abundant. Indeed, many recent success stories of machine learning methods rely on large datasets
and suffer from overfitting in the face of insufficient data. It is however not realistic nor preferred to
always expect many examples for learning a new class or concept, rendering few-shot learning an
important problem to address.
We propose a model for this problem that aims to extract as much information as possible from each
training batch, a capability that is of increased importance when the available data for learning each
class is scarce. Towards this goal, we formulate few-shot learning in information retrieval terms: each
point acts as a "query" that ranks the remaining ones based on its predicted relevance to them. We are
then faced with the choice of a ranking loss function and a computational framework for optimization.
We choose to work within the framework of structured prediction and we optimize mean Average
Precision (mAP) using a standard Structural SVM (SSVM) [3], as well as a Direct Loss Minimization
(DLM) [4] approach. We argue that the objective of mAP is especially suited for the low-data regime
of interest since it allows us to fully exploit each batch by simultaneously optimizing over all relative
orderings of the batch points. Figure 1 provides an illustration of this training objective.
Our contribution is therefore to adopt an information retrieval perspective on the problem of few-shot
learning; we posit that a model is prepared for the sparse-labels setting by being trained in a manner
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Best viewed in color. Illustration of our training objective. Assume a batch of 6 points: G1,
G2 and G3 of class "green", Y1 and Y2 of "yellow", and another point. We show in columns 1-5
the predicted rankings for queries G1, G2, G3, Y1 and Y2, respectively. Our learning objective is to
move the 6 points in positions that simultaneously maximize the Average Precision (AP) of the 5
rankings. For example, the AP of G1's ranking would be optimal if G2 and G3 had received the two
highest ranks, and so on.
that fully exploits the information in each batch. We also introduce a new form of a few-shot learning
task, "few-shot retrieval", where given a "query" image and a pool of candidates all coming from previously-unseen classes, the task is to "retrieve" all relevant (identically labelled) candidates for the
query. We achieve competitive with the state-of-the-art results on the standard few-shot classification
benchmarks and show superiority over a strong baseline in the proposed few-shot retrieval problem.
2 Related Work
Our approach to few-shot learning heavily relies on learning an informative similarity metric, a goal
that has been extensively studied in the area of metric learning. This can be thought of as learning
a mapping of objects into a space where their relative positions are indicative of their similarity
relationships. We refer the reader to a survey of metric learning [5] and merely touch upon a few
representative methods here.
Neighborhood Component Analysis (NCA) [6] learns a metric aiming at high performance in nearest neighbour classification. Large Margin Nearest Neighbor (LMNN) [7] refers to another approach for
nearest neighbor classification which constructs triplets and employs a contrastive loss to move the
"anchor" of each triplet closer to the similarly-labelled point and farther from the dissimilar one by at
least a predefined margin.
More recently, various methods have emerged that harness the power of neural networks for metric
learning. These methods vary in terms of loss functions but have in common a mechanism for the
parallel and identically-parameterized embedding of the points that will inform the loss function.
Siamese and triplet networks are commonly-used variants of this family that operate on pairs and
triplets, respectively. Example applications include signature verification [8] and face verification
[9, 10]. NCA and LMNN have also been extended to their deep variants [11] and [12], respectively.
These methods often employ hard-negative mining strategies for selecting informative constraints
for training [10, 13]. A drawback of siamese and triplet networks is that they are local, in the sense
that their loss function concerns pairs or triplets of training examples, guiding the learning process
to optimize the desired relative positions of only two or three examples at a time. The myopia of
these local methods introduces drawbacks that are reflected in their embedding spaces. [14] propose
a method to address this by using higher-order information.
We also learn a similarity metric in this work, but our approach is specifically tailored for few-shot
learning. Other metric learning approaches for few-shot learning include [15, 1, 16, 17]. [15] employs
a deep convolutional neural network that is trained to correctly predict pairwise similarities. Attentive
Recurrent Comparators [16] also perform pairwise comparisons but form the representation of the
pair through a sequence of glimpses at the two points that comprise it via a recurrent neural network.
We note that these pairwise approaches do not offer a natural mechanism to solve K-shot N-way tasks
for K > 1 and focus on one-shot learning, whereas our method tackles the more general few-shot
learning problem. Matching Networks [1] aim to "match" the training setup to the evaluation trials of K-shot N-way classification: they divide each sampled training "episode" into disjoint support and
query sets and backpropagate the classification error of each query point conditioned on the support
set. Prototypical Networks [17] also perform episodic training, and use the simple yet effective
mechanism of representing each class by the mean of its examples in the support set, constructing a
"prototype" in this way that each query example will be compared with. Our approach can be thought
of as constructing all such query/support sets within each batch in order to fully exploit it.
Another family of methods for few-shot learning is based on meta-learning. Some representative
work in this category includes [2, 18]. These approaches present models that learn how to use the
support set in order to update the parameters of a learner model in such a way that it can generalize to
the query set. Meta-Learner LSTM [2] learns an initialization for learners that can solve new tasks,
whereas Model-Agnostic Meta-Learner (MAML) [18] learns an update step that a learner can take to
be successfully adapted to a new task. Finally, [19] presents a method that uses an external memory
module that can be integrated into models for remembering rarely occurring events in a life-long
learning setting. They also demonstrate competitive results on few-shot classification.
3 Background
3.1 Mean Average Precision (mAP)
Consider a batch B of points: X = {x_1, x_2, . . . , x_N} and denote by c_j the class label of the point x_j. Let Rel^{x_1} = {x_j ∈ B : c_1 == c_j} be the set of points that are relevant to x_1, determined in a binary fashion according to class membership. Let O^{x_1} denote the ranking based on the predicted similarity between x_1 and the remaining points in B so that O^{x_1}[j] stores x_1's jth most similar point. Precision at j in the ranking O^{x_1}, denoted by Prec@j^{x_1}, is the proportion of points that are relevant to x_1 within the j highest-ranked ones. The Average Precision (AP) of this ranking is then computed by averaging the precisions at j over all positions j in O^{x_1} that store relevant points:

$$AP^{x_1} = \frac{1}{|Rel^{x_1}|} \sum_{\substack{j \in \{1,\dots,|B|-1\}:\\ O^{x_1}[j] \in Rel^{x_1}}} Prec@j^{x_1} \quad\text{where}\quad Prec@j^{x_1} = \frac{|\{k \le j : O^{x_1}[k] \in Rel^{x_1}\}|}{j}$$
Finally, mean Average Precision (mAP) calculates the mean AP across batch points.
$$mAP = \frac{1}{|B|} \sum_{i \in \{1,\dots,|B|\}} AP^{x_i}$$
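To make these definitions concrete, the following is a small sketch of our own (in numpy; not part of the original system) that computes AP for the ranking each batch point induces over the others and averages the results into mAP, with binary relevance determined by class membership:

import numpy as np

def average_precision(sims, relevant):
    """AP of the ranking induced by descending similarity scores.
    `relevant` is a boolean mask over the candidates."""
    order = np.argsort(-sims)
    rel_sorted = relevant[order]
    hits = np.cumsum(rel_sorted)
    ranks = np.arange(1, len(sims) + 1)
    prec_at_rel = (hits / ranks)[rel_sorted]   # Prec@j at relevant positions
    return prec_at_rel.mean() if rel_sorted.any() else 0.0

def mean_average_precision(embeddings, labels):
    """mAP over all queries in a batch; each point ranks the others
    by cosine similarity."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos = z @ z.T
    aps = []
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i     # exclude the query itself
        aps.append(average_precision(cos[i, mask], labels[mask] == labels[i]))
    return float(np.mean(aps))

# Toy usage.
rng = np.random.default_rng(0)
emb = rng.standard_normal((6, 4))
lab = np.array([0, 0, 0, 1, 1, 2])
print(mean_average_precision(emb, lab))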
3.2 Structural Support Vector Machine (SSVM)
Structured prediction refers to a family of tasks with inter-dependent structured output variables
such as trees, graphs, and sequences, to name just a few [3]. Our proposed learning objective
that involves producing a ranking over a set of candidates also falls into this category so we adopt
structured prediction as our computational framework. SSVM [3] is an efficient method for these
tasks with the advantage of being tunable to custom task loss functions. More concretely, let X
and Y denote the spaces of inputs and structured outputs, respectively. Assume a scoring function
F(x, y; w) depending on some weights w, and a task loss $L(y_{GT}, \hat{y})$ incurred when predicting $\hat{y}$ when the groundtruth is $y_{GT}$. The margin-rescaled SSVM optimizes an upper bound of the task loss formulated as:

$$\min_w \; \mathbb{E}\big[\max_{\hat{y} \in \mathcal{Y}} \{L(y_{GT}, \hat{y}) - F(x, y_{GT}; w) + F(x, \hat{y}; w)\}\big]$$

The loss gradient can then be computed as:

$$\nabla_w L(y) = \nabla_w F(\mathcal{X}, y_{hinge}, w) - \nabla_w F(\mathcal{X}, y_{GT}, w) \quad\text{with}\quad y_{hinge} = \arg\max_{\hat{y} \in \mathcal{Y}} \{F(\mathcal{X}, \hat{y}, w) + L(y_{GT}, \hat{y})\} \qquad (1)$$
3.3 Direct Loss Minimization (DLM)
[4] proposed a method that directly optimizes the task loss of interest instead of an upper bound of it.
In particular, they provide a perceptron-like weight update rule that they prove corresponds to the
gradient of the task loss. [20] present a theorem that equips us with the corresponding weight update
rule for the task loss in the case of nonlinear models, where the scoring function is parameterized by
a neural network. Since we make use of their theorem, we include it below for completeness.
Let D = {(x, y)} be a dataset composed of input x ∈ X and output y ∈ Y pairs. Let F(X, y, w) be a scoring function which depends on the input, the output and some parameters w ∈ R^A.
Theorem 1 (General Loss Gradient Theorem from [20]). When given a finite set Y, a scoring function F(X, y, w), a data distribution, as well as a task-loss $L(y, \hat{y})$, then, under some mild regularity conditions, the direct loss gradient has the following form:

$$\nabla_w L(y, y_w) = \pm \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(\nabla_w F(\mathcal{X}, y_{direct}, w) - \nabla_w F(\mathcal{X}, y_w, w)\big) \qquad (2)$$

with:

$$y_w = \arg\max_{\hat{y} \in \mathcal{Y}} F(\mathcal{X}, \hat{y}, w) \quad\text{and}\quad y_{direct} = \arg\max_{\hat{y} \in \mathcal{Y}} \{F(\mathcal{X}, \hat{y}, w) \pm \epsilon L(y, \hat{y})\}$$
This theorem presents us with two options for the gradient update, henceforth the positive and negative update, obtained by choosing the + or − of the ±, respectively. [4] and [20] provide an intuitive view for each one. In the case of the positive update, $y_{direct}$ can be thought of as the "worst" solution since it corresponds to the output value that achieves high score while producing high task loss. In this case, the positive update encourages the model to move away from the bad solution $y_{direct}$. On the other hand, when performing the negative update, $y_{direct}$ represents the "best" solution: one that does well both in terms of the scoring function and the task loss. The model is hence encouraged in this case to adjust its weights towards the direction of the gradient of this best solution's score.
In a nutshell, this theorem provides us with the weight update rule for the optimization of a custom
task loss, provided that we define a scoring function and procedures for performing standard and
loss-augmented inference.
3.4 Relationship between DLM and SSVM
As also noted in [4], the positive update of direct loss minimization strongly resembles that of the
margin-rescaled structural SVM [3] which also yields a loss-informed weight update rule. This
gradient computation differs from that of the direct loss minimization approach only in that, while
SSVM considers the score of the ground-truth F (X , yGT , w), direct loss minimization considers the
score of the current prediction F (X , yw , w). The computation of yhinge strongly resembles that of
$y_{direct}$ in the positive update. Indeed SSVM's training procedure also encourages the model to move away from weights that produce the "worst" solution $y_{hinge}$.
3.5 Optimizing for Average Precision (AP)
In the following section we adapt and extend a method for optimizing AP [20].
Given a query point, the task is to rank N points x = (x1 , . . . , xN ) with respect to their relevance
to the query, where a point is relevant if it belongs to the same class as the query and irrelevant
otherwise. Let P and N be the sets of "positive" (i.e. relevant) and "negative" (i.e. irrelevant) points respectively. The output ranking is represented as $y_{ij}$ pairs where ∀i, j, $y_{ij} = 1$ if i is ranked higher than j and $y_{ij} = -1$ otherwise, and ∀i, $y_{ii} = 0$. Define $y = (\dots, y_{ij}, \dots)$ to be the collection of all such pairwise rankings.
The scoring function that [20] used is borrowed from [21] and [22]:
$$F(x, y, w) = \frac{1}{|P||N|} \sum_{i \in P, j \in N} y_{ij}\,(\phi(x_i, w) - \phi(x_j, w))$$

where $\phi(x_i, w)$ can be interpreted as the learned similarity between $x_i$ and the query.
[20] devise a dynamic programming algorithm to perform loss-augmented inference in this setting
which we make use of but we omit for brevity.
4 Few-Shot Learning by Optimizing mAP
In this section, we present our approach for few-shot learning that optimizes mAP. We extend the
work of [20] that optimizes for AP in order to account for all possible choices of query among the
batch points. This is not a straightforward extension as it requires ensuring that optimizing the AP of
one query's ranking does not harm the AP of another query's ranking.
In what follows we define a mathematical framework for this problem and we show that we can treat
each query independently without sacrificing correctness, therefore allowing us to efficiently and in parallel
learn to optimize all relative orderings within each batch. We then demonstrate how we can use the
frameworks of SSVM and DLM for optimization of mAP, producing two variants of our method
henceforth referred to as mAP-SSVM and mAP-DLM, respectively.
Setup: Let B be a batch of points: B = {x1 , x2 , . . . , xN } belonging to C different classes. Each
class c ∈ {1, 2, . . . , C} defines the positive set P^c containing the points that belong to c and the negative set N^c containing the rest of the points. We denote by c_i the class label of the ith point. We represent the output rankings as a collection of $y^i_{kj}$ variables where $y^i_{kj} = 1$ if k is ranked higher than j in i's ranking, $y^i_{kk} = 0$ and $y^i_{kj} = -1$ if j is ranked higher than k in i's ranking. For convenience we combine these comparisons for each query i in $y^i = (\dots, y^i_{kj}, \dots)$.
Let f(x, w) be the embedding function, parameterized by a neural network, and $\phi(x_1, x_2, w)$ the cosine similarity of points $x_1$ and $x_2$ in the embedding space given by w:

$$\phi(x_1, x_2, w) = \frac{f(x_1, w) \cdot f(x_2, w)}{|f(x_1, w)|\,|f(x_2, w)|}$$

$\phi(x_i, x_j, w)$ is typically referred to in the literature as the score of a siamese network.
We consider for each query i the function $F^i(\mathcal{X}, y^i, w)$:

$$F^i(\mathcal{X}, y^i, w) = \frac{1}{|P^{c_i}|\,|N^{c_i}|} \sum_{k \in P^{c_i} \setminus i} \; \sum_{j \in N^{c_i}} y^i_{kj}\,\big(\phi(x_i, x_k, w) - \phi(x_i, x_j, w)\big)$$

We then compose the scoring function by summing over all queries: $F(\mathcal{X}, y, w) = \sum_{i \in B} F^i(\mathcal{X}, y^i, w)$.
Further, for each query i ∈ B, we let $p^i = rank(y^i) \in \{0, 1\}^{|P^{c_i}|+|N^{c_i}|}$ be a vector obtained by sorting the $y^i_{kj}$'s ∀k ∈ P^{c_i} \ i, j ∈ N^{c_i}, such that for a point g ≠ i, $p^i_g = 1$ if g is relevant for query i and $p^i_g = -1$ otherwise. Then the AP loss for the ranking induced by some query i is defined as:

$$L^i_{AP}(p^i, \hat{p}^i) = 1 - \frac{1}{|P^{c_i}|} \sum_{j : \hat{p}^i_j = 1} Prec@j$$

where Prec@j is the percentage of relevant points among the top-ranked j, and $p^i$ and $\hat{p}^i$ denote the ground-truth and predicted binary relevance vectors for query i, respectively. We define the mAP loss to be the average AP loss over all query points.
Inference: We proof-sketch in the supplementary material that inference can be performed efficiently
in parallel as we can decompose the problem of optimizing the orderings induced by the different
queries to optimizing each ordering separately. Specifically, for a query i of class c the computation of the $y^i_{kj}$'s, ∀k ∈ P^c \ i, j ∈ N^c, can happen independently of the computation of the $y^{i'}_{k'j'}$'s for some other query i' ≠ i. We are thus able to optimize the ordering induced by each query point independently of those induced by the other queries. For query i, positive point k and negative point j, the solution of standard inference is $y^i_w = \arg\max_{y^i} F^i(\mathcal{X}, y^i, w)$ and can be computed as follows:

$$y^i_{w_{kj}} = \begin{cases} 1, & \text{if } \phi(x_i, x_k, w) - \phi(x_i, x_j, w) > 0 \\ -1, & \text{otherwise} \end{cases} \qquad (3)$$
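Standard inference therefore reduces to per-query comparisons of positive and negative similarity scores. A minimal sketch of ours implementing equation (3) on top of cosine scores:

import numpy as np

def cosine_scores(f, query, others):
    """phi(x_query, x_k, w) for all candidates, given embeddings f(x)."""
    q = f[query] / np.linalg.norm(f[query])
    o = f[others] / np.linalg.norm(f[others], axis=1, keepdims=True)
    return o @ q

def standard_inference(f, i, positives, negatives):
    """y^i_w[k, j] = +1 iff phi(i, k) - phi(i, j) > 0, per equation (3)."""
    pos = cosine_scores(f, i, positives)     # shape (|P^ci| - 1,)
    neg = cosine_scores(f, i, negatives)     # shape (|N^ci|,)
    return np.where(pos[:, None] - neg[None, :] > 0, 1, -1)

# Toy usage: point 0 queries two same-class points against three others.
rng = np.random.default_rng(0)
f = rng.standard_normal((6, 4))
print(standard_inference(f, 0, positives=[1, 2], negatives=[3, 4, 5]))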
Loss-augmented inference for query i is defined as

$$y^i_{direct} = \arg\max_{\hat{y}^i} \{F^i(\mathcal{X}, \hat{y}^i, w) \pm \epsilon\, L^i(y^i, \hat{y}^i)\} \qquad (4)$$

and can be performed via a run of the dynamic programming algorithm of [20]. We can then combine the results of all the independent inferences to compute the overall scoring function

$$F(\mathcal{X}, y_w, w) = \sum_{i \in B} F^i(\mathcal{X}, y^i_w, w) \quad\text{and}\quad F(\mathcal{X}, y_{direct}, w) = \sum_{i \in B} F^i(\mathcal{X}, y^i_{direct}, w) \qquad (5)$$
Finally, we define the ground-truth output value $y_{GT}$. For any query i and distinct points m, n ≠ i we set $y^i_{GT_{mn}} = 1$ if m ∈ P^{c_i} and n ∈ N^{c_i}, $y^i_{GT_{mn}} = -1$ if n ∈ P^{c_i} and m ∈ N^{c_i}, and $y^i_{GT_{mn}} = 0$ otherwise.
Algorithm 1 Few-Shot Learning by Optimizing mAP
Input: A batch of points X = {x_1, . . . , x_N} of C different classes and ∀c ∈ {1, . . . , C} the sets P^c and N^c.
Initialize w
if using mAP-SSVM then
    Set $y^i_{GT}$ = ONES(|P^{c_i}|, |N^{c_i}|), ∀i = 1, . . . , N
end if
repeat
    if using mAP-DLM then
        Standard inference: Compute $y^i_w$, ∀i = 1, . . . , N as in Equation 3
    end if
    Loss-augmented inference: Compute $y^i_{direct}$, ∀i = 1, . . . , N via the DP algorithm of [20] as in Equation 4.
    In the case of mAP-SSVM, always use the positive update option and set ε = 1
    Compute F(X, y_direct, w) as in Equation 5
    if using mAP-DLM then
        Compute F(X, y_w, w) as in Equation 5
        Compute the gradient ∇_w L(y, y_w) as in Equation 2
    else if using mAP-SSVM then
        Compute F(X, y_GT, w) as in Equation 6
        Compute the gradient ∇_w L(y, y_w) as in Equation 1 (using y_direct in the place of y_hinge)
    end if
    Perform the weight update rule with stepsize η: w ← w − η ∇_w L(y, y_w)
until stopping criteria
We note that by construction of our scoring function defined above, we will only have to compute $y^i_{kj}$'s where k and i belong to the same class $c_i$ and j is a point from another class. Because of this, we set the $y^i_{GT}$ for each query i to be an appropriately-sized matrix of ones: $y^i_{GT} = ones(|P^{c_i}|, |N^{c_i}|)$. The overall score of the ground truth is then

$$F(\mathcal{X}, y_{GT}, w) = \sum_{i \in B} F^i(\mathcal{X}, y^i_{GT}, w) \qquad (6)$$
Optimizing mAP via SSVM and DLM We have now defined all the necessary components to
compute the gradient update as specified by the General Loss Gradient Theorem of [20] in equation 2
or as defined by the Structural SVM in equation 1. For clarity, Algorithm 1 describes this process,
outlining the two variants of our approach for few-shot learning, namely mAP-DLM and mAP-SSVM.
5 Evaluation
In what follows, we describe our training setup, the few-shot learning tasks of interest, the datasets we
use, and our experimental results. Through our experiments, we aim to evaluate the few-shot retrieval
ability of our method and additionally to compare our model to competing approaches for few-shot
classification. For this, we have updated our tables to include very recent work that is published
concurrently with ours in order to provide the reader with a complete view of the state-of-the-art on
few-shot learning. Finally, we also aim to investigate experimentally our model's aptness for learning
from little data via its training objective that is designed to fully exploit each training batch.
Controlling the influence of loss-augmented inference on the loss gradient: We found empirically that for the positive update of mAP-DLM and for mAP-SSVM, it is beneficial to introduce a hyperparameter α that controls the contribution of the loss-augmented $F(\mathcal{X}, y_{direct}, w)$ relative to that of $F(\mathcal{X}, y_w, w)$ in the case of mAP-DLM, or $F(\mathcal{X}, y_{GT}, w)$ in the case of mAP-SSVM. The updated rules that we use in practice for training mAP-DLM and mAP-SSVM, respectively, are shown below, where α is a hyperparameter:

$$\nabla_w L(y, y_w) = \pm \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(\alpha \nabla_w F(\mathcal{X}, y_{direct}, w) - \nabla_w F(\mathcal{X}, y_w, w)\big) \quad\text{and}$$
$$\nabla_w L(y) = \alpha \nabla_w F(\mathcal{X}, y_{direct}, w) - \nabla_w F(\mathcal{X}, y_{GT}, w)$$
We refer the reader to the supplementary material for more details concerning this hyperparameter.
                            |        Classification         |    Retrieval
                            |    1-shot     |    5-shot     |     1-shot
                            | 5-way  20-way | 5-way  20-way | 5-way  20-way
Siamese                     | 98.8   95.5   |  -      -     | 98.6   95.7
Matching Networks [1]       | 98.1   93.8   | 98.9   98.5   |  -      -
Prototypical Networks [17]  | 98.8   96.0   | 99.7   98.9   |  -      -
MAML [18]                   | 98.7   95.8   | 99.9   98.9   |  -      -
ConvNet w/ Memory [19]      | 98.4   95.0   | 99.6   98.6   |  -      -
mAP-SSVM (ours)             | 98.6   95.2   | 99.6   98.6   | 98.6   95.7
mAP-DLM (ours)              | 98.8   95.4   | 99.6   98.6   | 98.7   95.8
Table 1: Few-shot learning results on Omniglot (averaged over 1000 test episodes). We report accuracy for the
classification and mAP for the retrieval tasks.
Few-shot Classification and Retrieval Tasks: Each K-shot N-way classification "episode" is constructed as follows: N evaluation classes and 20 images from each one are selected uniformly at random from the test set. For each class, K out of the 20 images are randomly chosen to act as the "representatives" of that class. The remaining 20 − K images of each class are then to be classified among the N classes. This poses a total of (20 − K)N classification problems. Following the
standard procedure, we repeat this process 1000 times when testing on Omniglot and 600 times for
mini-ImageNet in order to compute the results reported in tables 1 and 2.
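For concreteness, a sketch of ours (the image store is a placeholder dictionary) of how such a K-shot N-way episode can be assembled:

import numpy as np

def sample_episode(images_by_class, n_way, k_shot, per_class=20, seed=None):
    """Return (support, queries): K representatives per class and the
    remaining (20 - K) images per class to be classified among N classes."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(list(images_by_class), size=n_way, replace=False)
    support, queries = [], []
    for label, c in enumerate(classes):
        idx = rng.choice(len(images_by_class[c]), size=per_class, replace=False)
        imgs = [images_by_class[c][j] for j in idx]
        support += [(im, label) for im in imgs[:k_shot]]
        queries += [(im, label) for im in imgs[k_shot:]]
    return support, queries

# Toy usage: 30 classes of 20 "images" (here just integers).
data = {c: list(range(20)) for c in range(30)}
support, queries = sample_episode(data, n_way=5, k_shot=1, seed=0)
print(len(support), len(queries))   # 5 and 95 = (20 - 1) * 5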
We also designed a similar one-shot N-way retrieval task, where to form each episode we select N
classes at random and 10 images per class, yielding a pool of 10N images. Each of these 10N images
acts as a query and ranks all remaining (10N - 1) images. The goal is to retrieve all 9 relevant images
before any of the (10N - 10) irrelevant ones. We measure the performance on this task using mAP.
Note that since this is a new task, there are no publicly available results for the competing few-shot
learning methods.
Our Algorithm for K-shot N-way classification: Our model classifies image x into class $c = \arg\max_i AP^i(x)$, where $AP^i(x)$ denotes the average precision of the ordering that image x assigns to the pool of all KN representatives assuming that the ground truth class for image x is i. This means that when computing $AP^i(x)$, the K representatives of class i will have a binary relevance of 1 while the K(N − 1) representatives of the other classes will have a binary relevance of 0. Note that in the one-shot learning case where K = 1 this amounts to classifying x into the class whose (single) representative is most similar to x according to the model's learned similarity metric.
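In code, the decision rule is a loop over candidate labels, computing the AP that x would obtain under each relevance hypothesis. A self-contained sketch of ours, assuming cosine similarity as the learned metric:

import numpy as np

def ap_of_ranking(sims, relevant):
    """Average precision of the descending-similarity ranking."""
    order = np.argsort(-sims)
    rel = relevant[order]
    prec = np.cumsum(rel) / np.arange(1, len(sims) + 1)
    return prec[rel].mean() if rel.any() else 0.0

def classify_by_map(query_emb, rep_embs, rep_labels, n_way):
    """Pick c = argmax_i AP^i(x): the label whose representatives, if
    assumed relevant, give the query's ranking the highest AP."""
    q = query_emb / np.linalg.norm(query_emb)
    r = rep_embs / np.linalg.norm(rep_embs, axis=1, keepdims=True)
    sims = r @ q
    aps = [ap_of_ranking(sims, rep_labels == c) for c in range(n_way)]
    return int(np.argmax(aps))

# Toy 2-shot 3-way usage.
rng = np.random.default_rng(0)
reps = rng.standard_normal((6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
print(classify_by_map(reps[0] + 0.1 * rng.standard_normal(4), reps, labels, 3))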
We note that the siamese model does not naturally offer a procedure for exploiting all K representatives
of each class when making the classification decision for some reference. Therefore we omit few-shot
learning results for siamese when K > 1 and examine this model only in the one-shot case.
Training details We use the same embedding architecture for all of our models for both Omniglot and
mini-ImageNet. This architecture mimics that of [1] and consists of 4 identical blocks stacked upon
each other. Each of these blocks consists of a 3x3 convolution with 64 filters, batch normalization
[23], a ReLU activation, and 2x2 max-pooling. We resize the Omniglot images to 28x28, and the
mini-ImageNet images to 3x84x84, therefore producing a 64-dimensional feature vector for each
Omniglot image and a 1600-dimensional one for each mini-ImageNet image. We use ADAM [24]
for training all models. We refer the reader to the supplementary for more details.
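For reference, the embedding tower described above is straightforward to write down. Below is our reconstruction in PyTorch; the block structure, filter counts and input/output sizes are as stated in the text, while details such as the convolution padding are our assumptions:

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """One of the 4 identical blocks: 3x3 conv (64 filters), batch norm,
    ReLU, 2x2 max-pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class Embedding(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.blocks = nn.Sequential(*[
            conv_block(in_channels if i == 0 else 64, 64) for i in range(4)
        ])

    def forward(self, x):
        return self.blocks(x).flatten(start_dim=1)

# 28x28 Omniglot -> 64-dim; 3x84x84 mini-ImageNet -> 1600-dim.
print(Embedding(1)(torch.zeros(2, 1, 28, 28)).shape)   # torch.Size([2, 64])
print(Embedding(3)(torch.zeros(2, 3, 84, 84)).shape)   # torch.Size([2, 1600])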
Omniglot The Omniglot dataset [25] is designed for testing few-shot learning methods. This dataset
consists of 1623 characters from 50 different alphabets, with each character drawn by 20 different
drawers. Following [1], we use 1200 characters as training classes and the remaining 423 for
evaluation while we also augment the dataset with random rotations by multiples of 90 degrees. The
results for this dataset are shown in Table 1. Both mAP-SSVM and mAP-DLM are trained with
α = 10, and for mAP-DLM the positive update was used. We used |B| = 128 and N = 16 for our models and the siamese. Overall, we observe that many methods perform very similarly on few-shot classification on this dataset, ours being among the top-performing ones. Further, we perform equally well or better than the siamese network in few-shot retrieval. We'd like to emphasize that the siamese
network is a tough baseline to beat, as can be seen from its performance in the classification tasks
where it outperforms recent few-shot learning methods.
mini-ImageNet mini-ImageNet refers to a subset of the ILSVRC-12 dataset [26] that was used as
a benchmark for testing few-shot learning approaches in [1]. This dataset contains 60,000 84x84
color images and constitutes a significantly more challenging benchmark than Omniglot. In order to
                            |       Classification (5-way)        |           Retrieval (1-shot)
                            |    1-shot        |    5-shot        |    5-way         |    20-way
Baseline Nearest Neighbors* | 41.08 ± 0.70 %   | 51.04 ± 0.65 %   |  -               |  -
Matching Networks* [1]      | 43.40 ± 0.78 %   | 51.09 ± 0.71 %   |  -               |  -
Matching Networks FCE* [1]  | 43.56 ± 0.84 %   | 55.31 ± 0.73 %   |  -               |  -
Meta-Learner LSTM* [2]      | 43.44 ± 0.77 %   | 60.60 ± 0.71 %   |  -               |  -
Prototypical Networks [17]  | 49.42 ± 0.78 %   | 68.20 ± 0.66 %   |  -               |  -
MAML [18]                   | 48.70 ± 1.84 %   | 63.11 ± 0.92 %   |  -               |  -
Siamese                     | 48.42 ± 0.79 %   |  -               | 51.24 ± 0.57 %   | 22.66 ± 0.13 %
mAP-SSVM (ours)             | 50.32 ± 0.80 %   | 63.94 ± 0.72 %   | 52.85 ± 0.56 %   | 23.87 ± 0.14 %
mAP-DLM (ours)              | 50.28 ± 0.80 %   | 63.70 ± 0.70 %   | 52.96 ± 0.55 %   | 23.68 ± 0.13 %
Table 2: Few-shot learning results on mini-ImageNet (averaged over 600 test episodes and reported with 95%
confidence intervals). We report accuracy for the classification and mAP for the retrieval tasks. *Results reported
by [2].
compare our method with the state-of-the-art on this benchmark, we adapt the splits introduced in [2]
which contain a total of 100 classes out of which 64 are used for training, 16 for validation and 20 for
testing. We train our models on the training set and use the validation set for monitoring performance.
Table 2 reports the performance of our method and recent competing approaches on this benchmark.
As for Omniglot, the results of both versions of our method are obtained with α = 10, and with the
positive update in the case of mAP-DLM. We used |B| = 128 and N = 8 for our models and the
siamese. We also borrow the baseline reported in [2] for this task which corresponds to performing
nearest-neighbors on top of the learned embeddings. Our method yields impressive results here,
outperforming recent approaches tailored for few-shot learning either via deep-metric learning such
as Matching Networks [1] or via meta-learning such as Meta-Learner LSTM [2] and MAML [18] in
few-shot classification. We set the new state-of-the-art for 1-shot 5-way classification. Further, our
models are superior than the strong baseline of the siamese network in the few-shot retrieval tasks.
CUB We also experimented on the Caltech-UCSD Birds (CUB) 200-2011 dataset [27], where we
outperform the siamese network as well. More details can be found in the supplementary.
Learning Efficiency We examine our method?s learning efficiency via comparison with a siamese
network. For fair comparison of these models, we create the training batches in a way that enforces
that they have the same amount of information available for each update: each training batch B
is formed by sampling N classes uniformly at random and |B| examples from these classes. The
siamese network is then trained on all possible pairs from these sampled points. Figure 2 displays the
performance of our model and the siamese on different metrics on Omniglot and mini-ImageNet. The
first two rows show the performance of our two variants and the siamese in the few-shot classification
(left) and few-shot retrieval (right) tasks, for various levels of difficulty as regulated by the different
values of N. The first row corresponds to Omniglot and the second to mini-ImageNet. We observe
that even when both methods converge to comparable accuracy or mAP values, our method learns
faster, especially when the "way" of the evaluation task is larger, making the problem harder.
In the third row in Figure 2, we examine the few-shot learning performance of our model and the
all-pairs siamese that were trained with N = 8 but with different |B|. We note that for a given N ,
larger batch size implies larger "shot". For example, for N = 8, |B| = 64 results in on average 8 examples of each class in each batch (8-shot) whereas |B| = 16 results in on average 2-shot. We observe that especially when the "shot" is smaller, there is a clear advantage in using our method over the all-pairs siamese. Therefore it indeed appears to be the case that the fewer examples we are given per class, the more we can benefit from our structured objective that simultaneously optimizes all relative orderings. Further, mAP-DLM can reach higher performance overall with smaller batch sizes (thus smaller "shot") than the siamese, indicating that our method's training objective is indeed
efficiently exploiting the batch examples and showing promise in learning from less data.
Discussion It is interesting to compare experimentally methods that have pursued different paths
in addressing the challenge of few-shot learning. In particular, the methods we compare against
each other in our tables include deep metric learning approaches such as ours, the siamese network,
Prototypical Networks and Matching Networks, as well as meta-learning methods such as Meta-Learner LSTM [2] and MAML [18]. Further, [19] has a metric-learning flavor but employs external
memory as a vehicle for remembering representations of rarely-observed classes. The experimental
Figure 2: Few-shot learning performance (on unseen validation classes). Each point represents the average
performance across 100 sampled episodes. Top row: Omniglot. Second and third rows: mini-ImageNet.
results suggest that there is no clear winner category and all these directions are worth exploring
further.
Overall, our model performs on par with the state-of-the-art results on the classification benchmarks,
while also offering the capability of few-shot retrieval where it exhibits superiority over a strong
baseline. Regarding the comparison between mAP-DLM and mAP-SSVM, we remark that they
mostly perform similarly to each other on the benchmarks considered. We have not observed in this
case a significant win for directly optimizing the loss of interest, offered by mAP-DLM, as opposed
to minimizing an upper bound of it.
6 Conclusion
We have presented an approach for few-shot learning that strives to fully exploit the available
information of the training batches, a skill that is utterly important in the low-data regime of few-shot
learning. We have proposed to achieve this via defining an information-retrieval based training
objective that simultaneously optimizes all relative orderings of the points in each training batch.
We experimentally support our claims for learning efficiency and present promising results on two
standard few-shot learning datasets. An interesting future direction is to not only reason about how to
best exploit the information within each batch, but additionally about how to create training batches
in order to best leverage the information in the training set. Furthermore, we leave it as future work to
explore alternative information retrieval metrics, instead of mAP, as training objectives for few-shot
learning (e.g. ROC curve, discounted cumulative gain etc).
References
[1] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016.
[2] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, volume 1, page 6, 2017.
[3] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(Sep):1453-1484, 2005.
[4] Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems, pages 1594-1602, 2010.
[5] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
[6] Jacob Goldberger, Sam Roweis, Geoff Hinton, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pages 513-520, 2005.
[7] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473-1480, 2005.
[8] Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.
[9] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. IEEE, 2005.
[10] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815-823, 2015.
[11] Ruslan Salakhutdinov and Geoffrey E Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, volume 11, 2007.
[12] Renqiang Min, David A Stanley, Zineng Yuan, Anthony Bonner, and Zhaolei Zhang. A deep non-linear feature mapping for large-margin knn classification. In Data Mining, 2009. ICDM'09. Ninth IEEE International Conference on, pages 357-366. IEEE, 2009.
[13] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4004-4012, 2016.
[14] Hyun Oh Song, Stefanie Jegelka, Vivek Rathod, and Kevin Murphy. Learnable structured clustering framework for deep metric learning. arXiv preprint arXiv:1612.01213, 2016.
[15] Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
[16] Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators. arXiv preprint arXiv:1703.00767, 2017.
[17] Jake Snell, Kevin Swersky, and Richard S Zemel. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175, 2017.
[18] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
[19] Łukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. arXiv preprint arXiv:1703.03129, 2017.
[20] Yang Song, Alexander G Schwing, Richard S Zemel, and Raquel Urtasun. Training deep neural networks via direct loss minimization. In Proceedings of The 33rd International Conference on Machine Learning, pages 2169-2177, 2016.
[21] Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims. A support vector method for optimizing average precision. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 271-278. ACM, 2007.
[22] Pritish Mohapatra, CV Jawahar, and M Pawan Kumar. Efficient optimization for average precision svm. In Advances in Neural Information Processing Systems, pages 2312-2320, 2014.
[23] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[24] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. One shot learning of simple visual concepts. In CogSci, volume 172, page 2, 2011.
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
[27] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset, 2011.
6,436 | 6,821 | Formal Guarantees on the Robustness of a
Classifier against Adversarial Manipulation
Matthias Hein and Maksym Andriushchenko
Department of Mathematics and Computer Science
Saarland University, Saarbrücken Informatics Campus, Germany
Abstract
Recent work has shown that state-of-the-art classifiers are quite brittle,
in the sense that a small adversarial change of an originally with high
confidence correctly classified input leads to a wrong classification again
with high confidence. This raises concerns that such classifiers are vulnerable
to attacks and calls into question their usage in safety-critical systems. We
show in this paper for the first time formal guarantees on the robustness
of a classifier by giving instance-specific lower bounds on the norm of the
input manipulation required to change the classifier decision. Based on
this analysis we propose the Cross-Lipschitz regularization functional. We
show that using this form of regularization in kernel methods resp. neural
networks improves the robustness of the classifier with no or small loss in
prediction performance.
1 Introduction
The problem of adversarial manipulation of classifiers has been addressed initially in the area
of spam email detection, see e.g. [5, 16]. The goal of the spammer is to manipulate the spam
email (the input of the classifier) in such a way that it is not detected by the classifier. In deep
learning the problem was brought up in the seminal paper by [24]. They showed for state-of-the-art deep neural networks that one can manipulate an originally correctly classified input
image with a non-perceivable small transformation so that the classifier now misclassifies
this image with high confidence, see [7] or Figure 3 for an illustration. This property calls
into question the usage of neural networks and other classifiers showing this behavior in
safety critical systems, as they are vulnerable to attacks. On the other hand this also shows
that the concepts learned by a classifier are still quite far away from the visual perception
of humans. Subsequent research has found fast ways to generate adversarial samples with
high probability [7, 12, 19] and suggested to use them during training as a form of data
augmentation to gain more robustness. However, it turns out that the so-called adversarial
training does not settle the problem as one can yet again construct adversarial examples
for the final classifier. Interestingly, it has recently been shown that there exist universal
adversarial changes which when applied lead, for every image, to a wrong classification with
high probability [17]. While one needs access to the neural network model for the generation
of adversarial changes, it has been shown that adversarial manipulations generalize across
neural networks [18, 15, 14], which means that neural network classifiers can be attacked
even as a black-box method. The most extreme case has been shown recently [15], where
they attack the commercial system Clarifai, which is a black-box system as neither the
underlying classifier nor the training data are known. Nevertheless, they could successfully
generate adversarial images with an existing network and fool this commercial system. This
emphasizes that there are indeed severe security issues with modern neural networks. While
countermeasures have been proposed [8, 7, 26, 18, 12, 2], none of them provides a guarantee
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
of preventing this behavior [3]. One might think that generative adversarial neural networks
should be resistant to this problem, but it has recently been shown [13] that they can also
be attacked by adversarial manipulation of input images.
In this paper we show for the first time instance-specific formal guarantees on the robustness
of a classifier against adversarial manipulation. That means we provide lower bounds on the
norm of the change of the input required to alter the classifier decision or said otherwise: we
provide a guarantee that the classifier decision does not change in a certain ball around the
considered instance. We exemplify our technique for two widely used family of classifiers:
kernel methods and neural networks. Based on the analysis we propose a new regularization
functional, which we call Cross-Lipschitz Regularization. This regularization functional
can be used in kernel methods and neural networks. We show that using Cross-Lipschitz
regularization improves both the formal guarantees of the resulting classifier (lower bounds)
as well as the change required for adversarial manipulation (upper bounds) while maintaining
similar prediction performance achievable with other forms of regularization. While there
exist fast ways to generate adversarial samples [7, 12, 19] without constraints, we provide
algorithms based on the first order approximation of the classifier which generate adversarial
samples satisfying box constraints in O(d log d), where d is the input dimension.
2 Formal Robustness Guarantees for Classifiers
In the following we consider the multi-class setting for K classes and d features where one
has a classifier f : ℝ^d → ℝ^K and a point x is classified via c = arg max_{j=1,...,K} f_j(x). We call a classifier robust at x if small changes of the input do not alter the decision. Formally, the problem can be described as follows [24]. Suppose that the classifier outputs class c for input x, that is f_c(x) > f_j(x) for j ≠ c (we assume the decision is unique). The problem of generating an input x + δ such that the classifier decision changes can be formulated as
$$\min_{\delta \in \mathbb{R}^d} \|\delta\|_p, \quad \text{s.th.} \quad \max_{l \neq c} f_l(x+\delta) \geq f_c(x+\delta) \ \text{ and } \ x+\delta \in C, \qquad (1)$$
where C is a constraint set specifying certain requirements on the generated input x + δ, e.g.,
an image has to be in [0,1]^d. Typically, the optimization problem (1) is non-convex and thus
intractable. The so generated points x + δ are called adversarial samples. Depending on
the p-norm the perturbations have different characteristics: for p = ∞ the perturbations are
small and affect all features, whereas for p = 1 one gets sparse solutions up to the extreme
case that only a single feature is changed. In [24] they used p = 2 which leads to more spread
but still localized perturbations. The striking result of [24, 7] was that for most instances
in computer vision datasets, the change δ necessary to alter the decision is astonishingly
small and thus clearly the label should not change. However, we will see later that our new
regularizer leads to robust classifiers in the sense that the required adversarial change is so
large that now also the class label changes (we have found the correct decision boundary),
see Fig 3. Already in [24] it is suggested to add the generated adversarial samples as a form
of data augmentation during the training of neural networks in order to achieve robustness.
This is denoted as adversarial training. Later on fast ways to approximately solve (1) were
proposed in order to speed up the adversarial training process [7, 12, 19]. However, in this
way, given that the approximation is successful, that is arg max fj (x + ?) 6= c, one gets just
j
upper bounds on the perturbation necessary to change the classifier decision. Also it was
noted early on, that the final classifier achieved by adversarial training is again vulnerable
to adversarial samples [7]. Robust optimization has been suggested as a measure against
adversarial manipulation [12, 21] which effectively boils down to adversarial training in
practice. It is thus fair to say that up to date no mechanism exists which prevents the
generation of adversarial samples nor can defend against it [3].
In this paper we focus instead on robustness guarantees, that is we show that the classifier
decision does not change in a small ball around the instance. Thus our guarantees hold for
any method to generate adversarial samples or input transformations due to noise or sensor
failure etc. Such formal guarantees are in our point of view absolutely necessary when a
classifier becomes part of a safety-critical technical system such as autonomous driving. In
the following we will first show how one can achieve such a guarantee and then explicitly
2
derive bounds for kernel methods and neural networks. We think that such formal guarantees
on robustness should be investigated further and it should become standard to report them
for different classifiers alongside the usual performance measures.
2.1 Formal Robustness Guarantee against Adversarial Manipulation
The following guarantee holds for any classifier which is continuously differentiable with
respect to the input in each output component. It is instance-specific and depends to some
extent on the confidence in the decision, at least if we measure confidence by the relative
difference f_c(x) − max_{j≠c} f_j(x) as it is typical for the cross-entropy loss and other multi-class losses. In the following we use the notation B_p(x, R) = {y ∈ ℝ^d | ‖x − y‖_p ≤ R}.
Theorem 2.1. Let x ∈ ℝ^d and f : ℝ^d → ℝ^K be a multi-class classifier with continuously
differentiable components and let c = arg max_{j=1,...,K} f_j(x) be the class which f predicts for x. Let
q ∈ ℝ be defined via 1/p + 1/q = 1. Then for all δ ∈ ℝ^d with
$$\|\delta\|_p \le \max_{R > 0} \, \min\left\{ \min_{j \neq c} \frac{f_c(x) - f_j(x)}{\max_{y \in B_p(x,R)} \|\nabla f_c(y) - \nabla f_j(y)\|_q}, \; R \right\} =: \alpha,$$
it holds c = arg max_{j=1,...,K} f_j(x + δ), that is, the classifier decision does not change on B_p(x, α).
Note that the bound requires in the denominator a bound on the local Lipschitz constant
of all cross terms f_c − f_j, which we call the local cross-Lipschitz constant in the following.
However, we do not require to have a global bound. The problem with a global bound is
that the ideal robust classifier is basically piecewise constant on larger regions with sharp
transitions between the classes. However, the global Lipschitz constant would then just be
influenced by the sharp transition zones and would not yield a good bound, whereas the
local bound can adapt to regions where the classifier is approximately constant and then
yields good guarantees. In [24, 4] they suggest to study the global Lipschitz constant¹ of
each f_j, j = 1, ..., K. A small global Lipschitz constant for all f_j implies a good bound as
$$\|\nabla f_j(y) - \nabla f_c(y)\|_q \le \|\nabla f_j(y)\|_q + \|\nabla f_c(y)\|_q, \qquad (2)$$
but the converse does not hold. As discussed below it turns out that our local estimates are
significantly better than the suggested global estimates which implies also better robustness
guarantees. In turn we want to emphasize that our bound is tight, that is the bound is
attained, for linear classifiers f_j(x) = ⟨w_j, x⟩, j = 1, ..., K. It holds
$$\|\delta\|_p = \min_{j \neq c} \frac{\langle w_c - w_j, x\rangle}{\|w_c - w_j\|_q}.$$
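As an illustration, the linear case is directly computable. The following is a minimal NumPy sketch (our own illustration, not code from the paper) of the minimal adversarial perturbation norm for a linear multi-class classifier:

```python
import numpy as np

def min_perturbation_linear(W, x, p=2.0):
    """Minimal ||delta||_p that changes the decision of f_j(x) = <w_j, x>:
    min_{j != c} <w_c - w_j, x> / ||w_c - w_j||_q, with 1/p + 1/q = 1."""
    scores = W @ x
    c = int(np.argmax(scores))
    if p == 1.0:
        q = np.inf
    elif p == np.inf:
        q = 1.0
    else:
        q = p / (p - 1.0)          # dual norm exponent
    vals = [(scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j], ord=q)
            for j in range(W.shape[0]) if j != c]
    return min(vals)
```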
In Section 4 we refine this result for the case when the input is constrained to [0,1]^d. In
general, it is possible to integrate constraints on the input by simply doing the maximum
over the intersection of B_p(x, R) with the constraint set, e.g. [0,1]^d for gray-scale images.
2.2 Evaluation of the Bound for Kernel Methods
Next, we discuss how the bound can be evaluated for different classifier models. For simplicity
we restrict ourselves to the case p = 2 (which implies q = 2) and leave the other cases to
future work. We consider the class of kernel methods, that is the classifier has the form
$$f_j(x) = \sum_{r=1}^{n} \alpha_{jr} \, k(x_r, x),$$
where (x_r)_{r=1}^n are the n training points, k : ℝ^d × ℝ^d → ℝ is a positive definite kernel function and α ∈ ℝ^{K×n} are the trained parameters e.g. of an SVM. The goal is to upper bound the
and ? ? RK?n are the trained parameters e.g. of a SVM. The goal is to upper bound the
1
The Lipschitz constant L wrt to p-norm of a piecewise continuously differentiable function is
given as L = supx?Rd k?f (x)kq . Then it holds, |f (x) ? f (y)| ? L kx ? ykp .
3
term max_{y∈B₂(x,R)} ‖∇f_j(y) − ∇f_c(y)‖₂ for this classifier model. A simple calculation shows
$$0 \le \|\nabla f_j(y) - \nabla f_c(y)\|_2^2 = \sum_{r,s=1}^{n} (\alpha_{jr} - \alpha_{cr})(\alpha_{js} - \alpha_{cs})\, \langle \nabla_y k(x_r, y), \nabla_y k(x_s, y)\rangle. \qquad (3)$$
It has been reported that kernel methods with a Gaussian kernel are robust to noise. Thus
we specialize now to this class, that is k(x, y) = e^{−γ‖x−y‖₂²}. In this case
$$\langle \nabla_y k(x_r, y), \nabla_y k(x_s, y)\rangle = 4\gamma^2\, \langle y - x_r, y - x_s\rangle\, e^{-\gamma\|x_r - y\|_2^2}\, e^{-\gamma\|x_s - y\|_2^2}.$$
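This identity is easy to verify numerically; a small sketch (ours, not from the paper), which can be cross-checked against finite differences of k:

```python
import numpy as np

def grad_kernel_inner(xr, xs, y, gamma):
    """<grad_y k(xr, y), grad_y k(xs, y)> for k(x, y) = exp(-gamma * ||x - y||_2^2),
    using grad_y k(x, y) = -2 * gamma * (y - x) * k(x, y)."""
    k_r = np.exp(-gamma * np.sum((xr - y) ** 2))
    k_s = np.exp(-gamma * np.sum((xs - y) ** 2))
    return 4.0 * gamma ** 2 * np.dot(y - xr, y - xs) * k_r * k_s
```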
We derive the following bound
Proposition 2.1. Let β_r = α_{jr} − α_{cr}, r = 1, ..., n, and define M = min{‖2x − x_r − x_s‖₂/2, R} and S = ‖2x − x_r − x_s‖₂. Then
$$\max_{y \in B_2(x,R)} \|\nabla f_j(y) - \nabla f_c(y)\|_2 \le 2\gamma \Bigg( \sum_{\substack{r,s=1 \\ \beta_r \beta_s \ge 0}}^{n} \beta_r \beta_s \Big[ \max\{\langle x - x_r, x - x_s\rangle + RS + R^2, 0\}\, e^{-\gamma\left(\|x-x_r\|_2^2 + \|x-x_s\|_2^2 - 2MS + 2M^2\right)}$$
$$+ \min\{\langle x - x_r, x - x_s\rangle + RS + R^2, 0\}\, e^{-\gamma\left(\|x-x_r\|_2^2 + \|x-x_s\|_2^2 + 2RS + 2R^2\right)} \Big]$$
$$+ \sum_{\substack{r,s=1 \\ \beta_r \beta_s < 0}}^{n} \beta_r \beta_s \Big[ \max\{\langle x - x_r, x - x_s\rangle - MS + M^2, 0\}\, e^{-\gamma\left(\|x-x_r\|_2^2 + \|x-x_s\|_2^2 + 2RS + 2R^2\right)}$$
$$+ \min\{\langle x - x_r, x - x_s\rangle - MS + M^2, 0\}\, e^{-\gamma\left(\|x-x_r\|_2^2 + \|x-x_s\|_2^2 - 2MS + 2M^2\right)} \Big] \Bigg)^{1/2}$$
While the bound leads to non-trivial estimates as seen in Section 5, the bound is not very
tight. The reason is that the sum is bounded elementwise, which is quite pessimistic. We
think that better bounds are possible but have to postpone this to future work.
2.3 Evaluation of the Bound for Neural Networks
We derive the bound for a neural network with one hidden layer. In principle, the technique
we apply below can be used for arbitrary layers but the computational complexity increases
rapidly. The problem is that in the directed network topology one has to consider almost
each path separately to derive the bound. Let U be the number of hidden units and w, u
are the weight matrices of the output resp. input layer. We assume that the activation
function σ is continuously differentiable and assume that the derivative σ′ is monotonically
increasing. Our prototype activation function, which we use later on in the experiments, is the
differentiable approximation σ_α(x) = (1/α) log(1 + e^{αx}) of the ReLU activation function
σ_ReLU(x) = max{0, x}. Note that lim_{α→∞} σ_α(x) = σ_ReLU(x) and σ′_α(x) = 1/(1 + e^{−αx}).
The output of the neural network can be written as
$$f_j(x) = \sum_{r=1}^{U} w_{jr}\, \sigma\Big( \sum_{s=1}^{d} u_{rs} x_s \Big), \qquad j = 1, \ldots, K,$$
where for simplicity we omit any bias terms, but it is straightforward to consider also models
with bias. A direct computation shows that
$$\|\nabla f_j(y) - \nabla f_c(y)\|_2^2 = \sum_{r,m=1}^{U} (w_{jr} - w_{cr})(w_{jm} - w_{cm})\, \sigma'(\langle u_r, y\rangle)\, \sigma'(\langle u_m, y\rangle) \sum_{l=1}^{d} u_{rl} u_{ml}, \qquad (4)$$
where u_r ∈ ℝ^d is the r-th row of the weight matrix u ∈ ℝ^{U×d}. The resulting bound is given
in the following proposition.
Proposition 2.2. Let σ be a continuously differentiable activation function with σ′ monotonically increasing. Define γ_{rm} = (w_{jr} − w_{cr})(w_{jm} − w_{cm}) Σ_{l=1}^d u_{rl} u_{ml}. Then
$$\max_{y \in B_2(x,R)} \|\nabla f_j(y) - \nabla f_c(y)\|_2 \le \Bigg( \sum_{r,m=1}^{U} \Big[ \max\{\gamma_{rm}, 0\}\, \sigma'\big(\langle u_r, x\rangle + R\|u_r\|_2\big)\, \sigma'\big(\langle u_m, x\rangle + R\|u_m\|_2\big)$$
$$+ \min\{\gamma_{rm}, 0\}\, \sigma'\big(\langle u_r, x\rangle - R\|u_r\|_2\big)\, \sigma'\big(\langle u_m, x\rangle - R\|u_m\|_2\big) \Big] \Bigg)^{1/2}$$
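For reference, the bound of Proposition 2.2 is cheap to evaluate. A minimal NumPy sketch (our own; Wout, Win and the helper names are hypothetical), assuming the softplus approximation of ReLU from above:

```python
import numpy as np

def softplus_deriv(z, alpha=10.0):
    # derivative of sigma_alpha(z) = (1/alpha) * log(1 + exp(alpha * z))
    return 1.0 / (1.0 + np.exp(-alpha * z))

def local_cross_lipschitz_bound(Wout, Win, x, R, c, j, alpha=10.0):
    """Proposition 2.2 upper bound on max_{y in B_2(x,R)} ||grad f_j(y) - grad f_c(y)||_2
    for f(x) = Wout @ sigma_alpha(Win @ x). Wout: (K, U), Win: (U, d)."""
    d = Wout[j] - Wout[c]                       # output weight differences, shape (U,)
    gamma = np.outer(d, d) * (Win @ Win.T)      # gamma_rm = d_r * d_m * <u_r, u_m>
    a = Win @ x                                 # <u_r, x>
    rad = R * np.linalg.norm(Win, axis=1)       # R * ||u_r||_2
    s_plus = softplus_deriv(a + rad, alpha)
    s_minus = softplus_deriv(a - rad, alpha)
    total = (np.maximum(gamma, 0.0) * np.outer(s_plus, s_plus)
             + np.minimum(gamma, 0.0) * np.outer(s_minus, s_minus)).sum()
    return float(np.sqrt(max(total, 0.0)))
```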
As discussed above the global Lipschitz bounds of the individual classifier outputs, see (2),
lead to an upper bound of our desired local cross-Lipschitz constant. In the experiments
below our local bounds on the Lipschitz constant are up to 8 times smaller, than what one
would achieve via the global Lipschitz bounds of [24]. This shows that their global approach
is much too rough to get meaningful robustness guarantees.
3 The Cross-Lipschitz Regularization Functional
We have seen in Section 2 that if
$$\max_{j \neq c} \; \max_{y \in B_p(x,R)} \|\nabla f_c(y) - \nabla f_j(y)\|_q \qquad (5)$$
is small and f_c(x) − f_j(x) is large, then we get good robustness guarantees. The latter
property is typically already optimized in a multi-class loss function. We consider for all
methods in this paper the cross-entropy loss so that the differences in the results only come
from the chosen function class (kernel methods versus neural networks) and the chosen
regularization functional. The cross-entropy loss L : {1, ..., K} × ℝ^K → ℝ is given as
$$L(y, f(x)) = -\log \frac{e^{f_y(x)}}{\sum_{k=1}^{K} e^{f_k(x)}} = \log\Big( 1 + \sum_{k \neq y} e^{f_k(x) - f_y(x)} \Big).$$
In the latter formulation it becomes apparent that the loss tries to make the difference
f_y(x) − f_k(x) as large as possible for all k = 1, ..., K.
As our goal is good robustness guarantees, it is natural to consider a proxy of the quantity
in (5) for regularization. We define the Cross-Lipschitz Regularization functional as
$$\Omega(f) = \frac{1}{nK^2} \sum_{i=1}^{n} \sum_{l,m=1}^{K} \|\nabla f_l(x_i) - \nabla f_m(x_i)\|_2^2, \qquad (6)$$
where the (x_i)_{i=1}^n are the training points. The goal of this regularization functional is to
make the differences of the classifier functions at the data points as constant as possible. In
total, by minimizing
$$\frac{1}{n} \sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda\, \Omega(f) \qquad (7)$$
over some function class, we thus try to maximize f_c(x_i) − f_j(x_i) and at the same time
keep ‖∇f_l(x_i) − ∇f_m(x_i)‖₂² small uniformly over all classes. This automatically enforces
robustness of the resulting classifier. It is important to note that this regularization functional
is coherent with the loss as it shares the same degrees of freedom, that is adding the same
function g to all outputs: fj0 (x) = fj (x) + g(x) leaves loss and regularization functional
invariant. This is the main difference to [4], where they enforce the global Lipschitz constant
to be smaller than one.
3.1 Cross-Lipschitz Regularization in Kernel Methods
In kernel methods one typically uses the regularization functional induced by the kernel, which
is given as the squared norm of the function, f(x) = Σ_{i=1}^n α_i k(x_i, x), in the corresponding
reproducing kernel Hilbert space H_k: ‖f‖²_{H_k} = Σ_{i,j=1}^n α_i α_j k(x_i, x_j). In particular, for
translation invariant kernels one can make directly a connection to penalization of derivatives
of the function f via the Fourier transform, see [20]. However, penalizing higher-order
derivatives is irrelevant for achieving robustness. Given the kernel expansion of f , one can
write the Cross-Lipschitz regularization function as
$$\Omega(f) = \frac{1}{nK^2} \sum_{i=1}^{n} \sum_{l,m=1}^{K} \sum_{r,s=1}^{n} (\alpha_{lr} - \alpha_{mr})(\alpha_{ls} - \alpha_{ms})\, \langle \nabla_y k(x_r, x_i), \nabla_y k(x_s, x_i)\rangle.$$
Ω is convex in α ∈ ℝ^{K×n} as k′(x_r, x_s) = ⟨∇_y k(x_r, x_i), ∇_y k(x_s, x_i)⟩ is a positive definite
kernel for any x_i, and with the convex cross-entropy loss the learning problem in (7) is convex.
3.2 Cross-Lipschitz Regularization in Neural Networks
The standard way to regularize neural networks is weight decay; that is, the squared
Euclidean norm of all weights is added to the objective. More recently dropout [22], which
can be seen as a form of stochastic regularization, has been introduced. Dropout can
also be interpreted as a form of regularization of the weights [22, 10]. It is interesting
to note that classical regularization functionals which penalize derivatives of the resulting
classifier function are not typically used in deep learning, but see [6, 11]. As noted above
we restrict ourselves to one hidden layer neural networks to simplify notation, that is,
f_j(x) = Σ_{r=1}^U w_{jr} σ(Σ_{s=1}^d u_{rs} x_s), j = 1, ..., K. Then we can write the Cross-Lipschitz
regularization as
$$\Omega(f) = \frac{2}{nK^2} \sum_{r,s=1}^{U} \Big( K \sum_{l=1}^{K} w_{lr} w_{ls} - \sum_{l,m=1}^{K} w_{lr} w_{ms} \Big) \sum_{i=1}^{n} \sigma'(\langle u_r, x_i \rangle)\, \sigma'(\langle u_s, x_i \rangle) \sum_{l=1}^{d} u_{rl} u_{sl},$$
which leads to an expression that can be evaluated quickly using vectorization. Obviously, one
can also implement the Cross-Lipschitz regularization for all standard deep networks.
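As an illustration of the vectorized evaluation, here is a minimal NumPy sketch (ours; names are hypothetical) of Ω(f) for a one-hidden-layer network with the softplus activation:

```python
import numpy as np

def cross_lipschitz_reg(W, U, X, alpha=10.0):
    """Vectorized Cross-Lipschitz regularizer Omega(f) for a one-hidden-layer
    network f(x) = W @ softplus_alpha(U @ x).
    W: (K, U_hid) output weights, U: (U_hid, d) input weights, X: (n, d) data."""
    n, K = X.shape[0], W.shape[0]
    # K * sum_l w_lr w_ls - (sum_l w_lr)(sum_m w_ms), for all hidden-unit pairs (r, s)
    A = K * (W.T @ W) - np.outer(W.sum(axis=0), W.sum(axis=0))
    S = 1.0 / (1.0 + np.exp(-alpha * (X @ U.T)))   # sigma'(<u_r, x_i>), shape (n, U_hid)
    G = U @ U.T                                     # <u_r, u_s>
    return 2.0 / (n * K ** 2) * float(np.sum(A * G * (S.T @ S)))
```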
4 Box Constrained Adversarial Sample Generation
The main emphasis of this paper is on robustness guarantees, without resorting to particular
ways of generating adversarial samples. On the other hand, while Theorem 2.1 gives
lower bounds on the required input transformation, efficient ways to approximately solve
the adversarial sample generation in (1) are helpful to get upper bounds on the required
change. Upper bounds allow us to check how tight our derived lower bounds are. As all of
our experiments will be concerned with images, it is reasonable that our adversarial samples
are also images. However, to our knowledge, the current main techniques to generate
adversarial samples [7, 12, 19] integrate box constraints by clipping the results to [0,1]^d. We
provide in the following fast algorithms to generate adversarial samples which lie in [0,1]^d.
The strategy is similar to [12], where they use a linear approximation of the classifier to
derive adversarial samples with respect to different norms. Formally,
$$f_j(x + \delta) \approx f_j(x) + \langle \nabla f_j(x), \delta \rangle, \qquad j = 1, \ldots, K.$$
Assuming that the linear approximation holds, the optimization problem (1) integrating box
constraints for changing class c into j becomes
$$\min_{\delta \in \mathbb{R}^d} \|\delta\|_p \quad \text{sbj. to:} \quad f_j(x) - f_c(x) \ge \langle \nabla f_c(x) - \nabla f_j(x), \delta\rangle, \quad 0 \le x_j + \delta_j \le 1. \qquad (8)$$
In order to get the minimal adversarial sample we have to solve this for all j ≠ c and take
the one with minimal ‖δ‖_p. This yields the minimal adversarial change for linear classifiers.
Note that (8) is a convex optimization problem, which can be reduced to a one-parameter
problem in the dual. This allows us to derive the following result (proofs and algorithms are in
the supplement).
Proposition 4.1. Let p ∈ {1, 2, ∞}; then (8) can be solved in O(d log d) time.
For nonlinear classifiers a change of the decision is not guaranteed and thus we use later on
a binary search with a variable c instead of f_c(x) − f_j(x).
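The exact O(d log d) algorithms are given in the supplement. As a rough stand-in for the p = 2 case, problem (8) can also be solved by bisection on the single dual variable, since the box-constrained minimizer has the closed form δ(λ) = clip(λa, l, u) for the constraint ⟨a, δ⟩ ≥ b. A sketch (our own simplification, not the paper's algorithm):

```python
import numpy as np

def min_l2_box(a, b, lo, hi, iters=60):
    """min ||delta||_2  s.t.  <a, delta> >= b  and  lo <= delta <= hi.
    For (8) with p = 2: a = grad f_j(x) - grad f_c(x), b = f_c(x) - f_j(x),
    lo = -x, hi = 1 - x (componentwise). The box-constrained minimizer of the
    Lagrangian is delta(lam) = clip(lam * a, lo, hi), and <a, delta(lam)> is
    nondecreasing in lam, so we bisect on lam."""
    gain = lambda lam: float(a @ np.clip(lam * a, lo, hi))
    if float(a @ np.where(a > 0, hi, lo)) < b:
        return None                      # decision change infeasible inside the box
    lam_hi = 1.0
    while gain(lam_hi) < b:              # grow the bracket
        lam_hi *= 2.0
    lam_lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lam_lo + lam_hi)
        if gain(mid) < b:
            lam_lo = mid
        else:
            lam_hi = mid
    return np.clip(lam_hi * a, lo, hi)
```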
5 Experiments
The goal of the experiments is the evaluation of the robustness of the resulting classifiers
and not necessarily state-of-the-art results in terms of test error. In all cases we compute the
robustness guarantees from Theorem 2.1 (lower bound on the norm of the minimal change
required to change the classifier decision), where we optimize over R using binary search,
and adversarial samples with the algorithm for the 2-norm from Section 4 (upper bound
on the norm of the minimal change required to change the classifier decision), where we do
a binary search in the classifier output difference in order to find a point on the decision
boundary. Additional experiments can be found in the supplementary material.
Kernel methods: We optimize the cross-entropy loss once with the standard regularization
(Kernel-LogReg) and with Cross-Lipschitz regularization (Kernel-CL). Both are convex
optimization problems and we use L-BFGS to solve them. We use the Gaussian kernel
k(x, y) = e^{−γ‖x−y‖₂²} where γ = α/σ²_{KNN40}, σ_{KNN40} is the mean of the 40 nearest neighbor
distances on the training set and α ∈ {0.5, 1, 2, 4}. We show the results for MNIST (60000
training and 10000 test samples). However, we have checked that parameter selection
using a subset of 50000 images from the training set and evaluating on the rest yields
indeed the parameters which give the best test errors when trained on the full set. The
regularization parameter is chosen in λ ∈ {10^{−k} | k ∈ {5, 6, 7, 8}} for Kernel-SVM and
λ ∈ {10^{−k} | k ∈ {0, 1, 2, 3}} for our Kernel-CL. The results of the optimal parameters are
given in the following table and the performance of all parameters is shown in Figure 1. Note
that due to the high computational complexity we could evaluate the robustness guarantees
only for the optimal parameters.
                  test error   avg. ‖δ‖₂ adv. samples   avg. ‖δ‖₂ rob. guar.
No Reg. (λ = 0)     2.23%             2.39                    0.037
K-SVM               1.48%             1.91                    0.058
K-CL                1.44%             3.12                    0.045
Figure 1: Kernel Methods: Cross-Lipschitz regularization achieves both better test error and robustness against
adversarial samples (upper bounds, larger is better) compared to the standard regularization. The robustness
guarantee is weaker than for neural networks but this is most likely due to the relatively loose bound.
Neural Networks: Before we demonstrate how upper and lower bounds improve using
cross-Lipschitz regularization, we first want to highlight the importance of the usage of the
local cross-Lipschitz constant in Theorem 2.1 for our robustness guarantee.
Local versus global Cross-Lipschitz constant: While no robustness guarantee has
been proven before, it has been discussed in [24] that penalization of the global Lipschitz
constant should improve robustness, see also [4]. For that purpose they derive the Lipschitz
constants of several different layers and use the fact that the Lipschitz constant of a
composition of functions is upper bounded by the product of the Lipschitz constants of
the functions. In analogy, this would mean that the term sup_{y∈B(x,R)} ‖∇f_c(y) − ∇f_j(y)‖₂,
which we have upper bounded in Proposition 2.2, in the denominator in Theorem 2.1 could
be replaced² by the global Lipschitz constant of g(x) := f_c(x) − f_j(x), which is given as
sup_{y∈ℝ^d} ‖∇g(y)‖₂ = sup_{x≠y} |g(x) − g(y)| / ‖x − y‖₂. With ‖U‖_{2,2} being the largest singular value of U, we have
$$|g(x) - g(y)| = \langle w_c - w_j, \sigma(Ux) - \sigma(Uy)\rangle \le \|w_c - w_j\|_2 \, \|\sigma(Ux) - \sigma(Uy)\|_2 \le \|w_c - w_j\|_2 \, \|U(x-y)\|_2 \le \|w_c - w_j\|_2 \, \|U\|_{2,2} \, \|x - y\|_2,$$
where we used that σ is contractive as σ′(z) = 1/(1 + e^{−αz}), and thus we get
$$\sup_{y \in \mathbb{R}^d} \|\nabla f_c(y) - \nabla f_j(y)\|_2 \le \|w_c - w_j\|_2 \, \|U\|_{2,2}.$$
²Note that then the optimization over R in Theorem 2.1 would be unnecessary.
                  None   Dropout   Weight Dec.   Cross Lip.
MNIST (plain)     0.69    0.48        0.68          0.21
CIFAR10 (plain)   0.22    0.13        0.24          0.17

Table 1: We show the average ratio α_global/α_local of the robustness guarantees α_global, α_local from Theorem 2.1 on the test data for MNIST and CIFAR10 and different regularizers. The guarantees using the local Cross-Lipschitz constant are up to eight times better than with the global one.
The advantage is clearly that this global Cross-Lipschitz constant can just be computed
once and by using it in Theorem 2.1 one can evaluate the guarantees very quickly. However,
it turns out that one gets significantly better robustness guarantees by using the local
Cross-Lipschitz constant in terms of the bound derived in Proposition 2.2 instead of the just
derived global Lipschitz constant. Note that the optimization over R in Theorem 2.1 is done
using a binary search, noting that the bound on the local cross-Lipschitz constant in Proposition
2.2 is monotonically increasing in R, so that the ratio in Theorem 2.1 is monotonically
decreasing in R. We have the following comparison in Table 1. We
want to highlight that the robustness guarantee with the global Cross-Lipschitz constant
was always worse than when using the local Cross-Lipschitz constant across all regularizers
and data sets. Table 1 shows that the guarantees using the local Cross-Lipschitz constant can be up
to eight times better than for the global one. As these are just one hidden layer networks, it
is obvious that robustness guarantees for deep neural networks based on the global Lipschitz
constants will be too coarse to be useful.
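To make the procedure concrete, here is a schematic sketch (ours; cl_bound is a hypothetical routine implementing e.g. Proposition 2.2) of maximizing the guarantee of Theorem 2.1 over R by binary search:

```python
import numpy as np

def robustness_guarantee(fx, c, cl_bound, R_max=10.0, steps=50):
    """Maximize the Theorem 2.1 guarantee over R by binary search.
    fx: vector of outputs f_j(x); cl_bound(j, R): any upper bound on
    max_{y in B_2(x,R)} ||grad f_c(y) - grad f_j(y)||_2, e.g. Proposition 2.2,
    assumed monotonically increasing in R."""
    K = len(fx)

    def alpha(R):
        ratios = [(fx[c] - fx[j]) / max(cl_bound(j, R), 1e-12)
                  for j in range(K) if j != c]
        return min(min(ratios), R)

    lo, hi = 0.0, R_max
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if alpha(mid) >= mid:   # min is still the radius term: increase R
            lo = mid
        else:                   # min is the ratio term: decrease R
            hi = mid
    return alpha(lo)
```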
Experiments: We use a one hidden layer network with 1024 hidden units and the softplus
activation function with α = 10. Thus the resulting classifier is continuously differentiable.
We compare three different regularization techniques: weight decay, dropout and our Cross-Lipschitz regularization. Training is done with SGD. For each method we have adapted
the learning rate (two per method) and regularization parameters (4 per method) so that
all methods achieve good performance. We do experiments for MNIST and CIFAR10
in three settings: plain, data augmentation and adversarial training. The exact settings
of the parameters and the augmentation techniques are described in the supplementary
material. The results for MNIST are shown in Figure 2 and the results for CIFAR10 are
in the supplementary material. For MNIST there is a clear trend that our Cross-Lipschitz
regularization improves the robustness of the resulting classifier while having competitive
resp. better test error. It is surprising that data augmentation does not lead to more
robust models. However, adversarial training improves the guarantees as well as adversarial
resistance. For CIFAR10 the picture is mixed, our CL-Regularization performs well for
the augmented task in test error and upper bounds but is not significantly better in the
robustness guarantees. The problem might be that the overall bad performance due to the
simple model is preventing a better behavior. Data augmentation leads to better test error
but the robustness properties (upper and lower bounds) are basically unchanged. Adversarial
training slightly improves performance compared to the plain setting and improves upper
and lower bounds in terms of robustness. We want to highlight that our guarantees (lower
bounds) and the upper bounds from the adversarial samples are not too far away.
Illustration of adversarial samples: we take one test image from MNIST and apply the
adversarial generation from Section 4 w.r.t. the 2-norm to generate the adversarial samples for
the different kernel methods and neural networks (plain setting), where we use for each method
the parameters leading to best test performance. All classifiers change their originally correct
decision to a "wrong" one. It is interesting to note that for Cross-Lipschitz regularization
(both kernel method and neural network) the "adversarial" sample is really at the decision
boundary between 1 and 8 (as predicted) and thus the new decision is actually correct.
This effect is strongest for our Kernel-CL, which also requires the strongest modification to
generate the adversarial sample. The situation is different for neural networks, where the
classifiers obtained from the two standard regularization techniques are still vulnerable, as
the adversarial sample is still clearly a 1 for dropout and weight decay.
Outlook: Formal guarantees on machine learning systems are becoming increasingly
important as they are used in safety-critical systems. We think that there should be more
8
[Figure 2 panels: left "Adversarial Resistance (Upper Bound) w.r.t. L2-norm", right "Robustness Guarantee (Lower Bound) w.r.t. L2-norm".]
Figure 2: Neural Networks. Left: adversarial resistance w.r.t. the L2-norm on MNIST. Right: average robustness guarantee w.r.t. the L2-norm on MNIST for different neural networks (one hidden layer, 1024 HU) and hyperparameters. The Cross-Lipschitz regularization leads to better robustness with similar or better prediction performance. Top row: plain MNIST; middle: data augmentation; bottom: adversarial training.
research on robustness guarantees (lower bounds), whereas current research is focused on
new attacks (upper bounds). We have argued that our instance-specific guarantees using our
local Cross-Lipschitz constant is more effective than using a global one and leads to lower
bounds which are up to 8 times better. A major open problem is to come up with tight
lower bounds for deep networks.
[Figure 3 panels: Original, Class 1 | K-SVM, Pred: 7, ‖δ‖₂ = 1.2 | K-CL, Pred: 8, ‖δ‖₂ = 3.5 | NN-WD, Pred: 8, ‖δ‖₂ = 1.2 | NN-DO, Pred: 7, ‖δ‖₂ = 1.1 | NN-CL, Pred: 8, ‖δ‖₂ = 2.6]
Figure 3: Top left: original test image; for each classifier we generate the corresponding adversarial sample which changes the classifier decision (denoted as Pred). Note that for Cross-Lipschitz regularization this new decision makes (often) sense, whereas for the neural network models (weight decay/dropout) the change is so small that the new decision is clearly wrong.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis,
J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia,
R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G.
Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker,
V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke,
Y. Yu, and X. Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed
systems, 2016.
[2] O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi. Measuring
neural net robustness with constraints. In NIPS, 2016.
[3] N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten
detection methods. In ACM Workshop on Artificial Intelligence and Security, 2017.
[4] M. Cisse, P. Bojanowksi, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving
robustness to adversarial examples. In ICML, 2017.
[5] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. Adversarial classification. In KDD,
2004.
[6] H. Drucker and Y. Le Cun. Double backpropagation increasing generalization performance. In
IJCNN, 1992.
[7] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples.
In ICLR, 2015.
[8] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples.
In ICLR Workshop, 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR,
pages 770–778, 2016.
[10] D. P. Helmbold and P. Long. On the inductive bias of dropout. Journal of Machine Learning
Research, 16:3403–3454, 2015.
[11] S. Hochreiter and J. Schmidhuber. Simplifying neural nets by discovering flat minima. In NIPS,
1995.
[12] R. Huang, B. Xu, D. Schuurmans, and C. Szepesvari. Learning with a strong adversary. In
ICLR, 2016.
[13] J. Kos, I. Fischer, and D. Song. Adversarial examples for generative models. In ICLR Workshop,
2017.
[14] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In
ICLR Workshop, 2017.
[15] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and
black-box attacks. In ICLR, 2017.
[16] D. Lowd and C. Meek. Adversarial learning. In KDD, 2005.
[17] S.M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations.
In CVPR, 2017.
[18] N. Papernot, P. McDonald, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial
perturbations against deep networks. In IEEE Symposium on Security & Privacy, 2016.
[19] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool
deep neural networks. In CVPR, pages 2574–2582, 2016.
[20] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[21] U. Shaham, Y. Yamada, and S. Negahban. Understanding adversarial training: Increasing local
stability of neural nets through robust optimization. In NIPS, 2016.
[22] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A
simple way to prevent neural networks from overfitting. Journal of Machine Learning Research,
15:1929–1958, 2014.
[23] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine
learning algorithms for traffic sign recognition. Neural Networks, 32:323–332, 2012.
[24] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus.
Intriguing properties of neural networks. In ICLR, pages 2503–2511, 2014.
[25] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, pages 87.1–87.12.
[26] S. Zheng, Y. Song, T. Leung, and I. J. Goodfellow. Improving the robustness of deep neural
networks via stability training. In CVPR, 2016.
6,437 | 6,822 | Associative Embedding: End-to-End Learning for
Joint Detection and Grouping
Alejandro Newell
Computer Science and Engineering
University of Michigan
Ann Arbor, MI
Zhiao Huang*
Institute for Interdisciplinary Information Sciences
Tsinghua University
Beijing, China
[email protected]
[email protected]
Jia Deng
Computer Science and Engineering
University of Michigan
Ann Arbor, MI
[email protected]
Abstract
We introduce associative embedding, a novel method for supervising convolutional
neural networks for the task of detection and grouping. A number of computer
vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of
detections is achieved with multi-stage pipelines, instead we propose an approach
that teaches a network to simultaneously output detections and group assignments.
This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to
multi-person pose estimation and report state-of-the-art performance on the MPII
and MS-COCO datasets.
1 Introduction
Many computer vision tasks can be viewed in the context of detection and grouping: detecting smaller
visual units and grouping them into larger structures. For example, in multi-person pose estimation
we detect body joints and group them into individual people; in instance segmentation we detect
pixels belonging to a semantic class and group them into object instances; in multi-object tracking
we detect objects across video frames and group them into tracks. In all of these cases, the output is a
variable number of visual units and their assignment into a variable number of visual groups.
Such tasks are often approached with two-stage pipelines that perform detection first and grouping
second. But such approaches may be suboptimal because detection and grouping are tightly coupled:
for example, in multiperson pose estimation, the same features used to recognize wrists or elbows in
an image would also suggest whether a wrist and elbow belong to the same limb.
In this paper we ask whether it is possible to jointly perform detection and grouping using a single-stage deep network trained end-to-end. We propose associative embedding, a novel method to express
output for joint detection and grouping. The basic idea is to introduce, for each detection, a vector
embedding that serves as a ?tag? to identify its group assignment. All detections associated with the
same tag value belong to the same group. Concretely, the network outputs a heatmap of per-pixel
* Work done while a visiting student at the University of Michigan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
detection scores and a set of per-pixel embeddings. The detections and groups are decoded by
extracting the corresponding embeddings from pixel locations with top detection scores.
To train a network to produce the correct tags, we use a loss function that encourages pairs of tags to
have similar values if the corresponding detections belong to the same group or dissimilar values
otherwise. It is important to note that we have no "ground truth" tags for the network to predict,
because what matters is not the particular tag values, only the differences between them. The network
has the freedom to decide on the tag values as long as they agree with the ground truth grouping.
We apply our approach to multiperson pose estimation, an important task for understanding humans
in images. Given an input image, multi-person pose estimation seeks to detect each person and
localize their body joints. Unlike single-person pose there are no prior assumptions of a person's
location or size. Multi-person pose systems must scan the whole image detecting all people and their
corresponding keypoints. For this task, we integrate associative embedding with a stacked hourglass
network [31], which produces a detection heatmap and a tagging heatmap for each body joint, and
then group body joints with similar tags into individual people. Experiments demonstrate that our
approach outperforms all recent methods and achieves state-of-the-art results on MS-COCO [27] and
MPII Multiperson Pose [3].
Our contributions are twofold: (1) we introduce associative embedding, a new method for single-stage, end-to-end joint detection and grouping. This method is simple and generic; it works with
any network architecture that produces pixel-wise prediction; (2) we apply associative embedding to
multiperson pose estimation and achieve state-of-the-art results on two standard benchmarks.
2 Related Work
Vector Embeddings Our method is related to many prior works that use vector embeddings. Works
in image retrieval have used vector embeddings to measure similarity between images [12, 43]. Works
in image classification, image captioning, and phrase localization have used vector embeddings to
connect visual features and text features by mapping them to the same vector space [11, 14, 22].
Works in natural language processing have used vector embeddings to represent the meaning of
words, sentences, and paragraphs [30, 24]. Our work differs from these prior works in that we use
vector embeddings as identity tags in the context of joint detection and grouping.
Perceptual Organization Work in perceptual organization aims to group the pixels of an image into
regions, parts, and objects. Perceptual organization encompasses a wide range of tasks of varying
complexity from figure-ground segmentation [28] to hierarchical image parsing [15]. Prior works
typically use a two stage pipeline [29], detecting basic visual units (patches, superpixels, parts, etc.)
first and grouping them second. Common grouping approaches include spectral clustering [41, 36],
conditional random fields (e.g. [23]), and generative probabilistic models (e.g. [15]). These grouping
approaches all assume pre-detected basic visual units and pre-computed affinity measures between
them but differ among themselves in the process of converting affinity measures into groups. In
contrast, our approach performs detection and grouping in one stage using a generic network that
includes no special design for grouping.
It is worth noting a close connection between our approach to those using spectral clustering.
Spectral clustering (e.g. normalized cuts [36]) techniques takes as input pre-computed affinities (such
as predicted by a deep network) between visual units and solves a generalized eigenproblem to
produce embeddings (one per visual unit) that are similar for visual units with high affinity. Angular
Embedding [28, 37] extends spectral clustering by embedding depth ordering as well as grouping. Our
approach differs from spectral clustering in that we have no intermediate representation of affinities
nor do we solve any eigenproblems. Instead our network directly outputs the final embeddings.
Our approach is also related to the work by Harley et al. on learning dense convolutional embeddings [16], which trains a deep network to produce pixel-wise embeddings for the task of semantic
segmentation. Our work differs from theirs in that our network produces not only pixel-wise embeddings but also pixel-wise detection scores. Our novelty lies in the integration of detection and
grouping into a single network; to the best of our knowledge such an integration has not been
attempted for multiperson human pose estimation.
Multiperson Pose Estimation Recent methods have made great progress improving human pose
estimation in images in particular for single person pose estimation [40, 38, 42, 31, 8, 5, 32, 4, 9, 13,
Figure 1: We use the stacked hourglass architecture from Newell et al. [31]. The network performs
repeated bottom-up, top-down inference producing a series of intermediate predictions (marked in
blue) until the last "hourglass" produces a final result (marked in green). Each box represents a 3x3
convolutional layer. Features are combined across scales by upsampling and performing elementwise
addition. The same ground truth is enforced across all predictions made by the network.
26, 18, 7, 39, 34]. For multiperson pose, prior and concurrent work can be categorized as either top-down or bottom-up. Top-down approaches [33, 17, 10] first detect individual people and then estimate
each person?s pose. Bottom-up approaches [35, 20, 21, 6] instead detect individual body joints and
then group them into individuals. Our approach more closely resembles bottom-up approaches but
differs in that there is no separation of a detection and grouping stage. The entire prediction is done at
once in a single stage. This does away with the need for complicated post-processing steps required
by other methods [6, 20].
3 Approach
To introduce associative embedding for joint detection and grouping, we first review the basic
formulation of visual detection. Many visual tasks involve detection of a set of visual units. These
tasks are typically formulated as scoring of a large set of candidates. For example, single-person
human pose estimation can be formulated as scoring candidate body joint detections at all possible
pixel locations. Object detection can be formulated as scoring candidate bounding boxes at various
pixel locations, scales, and aspect ratios.
The idea of associative embedding is to predict an embedding for each candidate in addition to the
detection score. The embeddings serve as tags that encode grouping: detections with similar tags
should be grouped together. In multiperson pose estimation, body joints with similar tags should be
grouped to form a single person. It is important to note that the absolute values of the tags do not
matter, only the distances between tags. That is, a network is free to assign arbitrary values to the
tags as long as the values are the same for detections belonging to the same group.
To train a network to predict the tags, we enforce a loss that encourages similar tags for detections
from the same group and different tags for detections across different groups. Specifically, this
tagging loss is enforced on candidate detections that coincide with the ground truth. We compare
pairs of detections and define a penalty based on the relative values of the tags and whether the
detections should be from the same group.
3.1 Network Architecture
Our approach requires that a network produce dense output to define a detection score and vector
embedding at each pixel of the input image. In this work we use the stacked hourglass architecture,
a model used previously for single-person pose estimation [31]. Each "hourglass" is comprised
of a standard set of convolutional and pooling layers to process features down to a low resolution
capturing the full global context of the image. These features are upsampled and combined with
outputs from higher resolutions until reaching a final output resolution. Stacking multiple hourglasses
enables repeated bottom-up and top-down inference to produce a more accurate final prediction.
Intermediate predictions are made by the network after each hourglass (Fig. 1). We refer the reader
to [31] for more details of the network architecture.
The stacked hourglass model was originally developed for single-person human pose estimation
and designed to output a heatmap for each body joint of a target person. The pixel with the highest
heatmap activation is used as the predicted location for that joint. The network consolidates global
and local features to capture information about the full structure of the body while preserving fine
Figure 2: An overview of our approach for producing multi-person pose estimates. For each joint
of the body, the network simultaneously produces detection heatmaps and predicts associative
embedding tags. We take the top detections for each joint and match them to other detections that
share the same embedding tag to produce a final set of individual pose predictions.
details for precise localization. This balance between global and local context is just as important
when predicting poses of multiple people.
We make some modifications to the network architecture to increase its capacity and accommodate
the increased difficulty of multi-person pose estimation. We increase the number of features at each
drop in resolution of the hourglass (256 → 384 → 512 → 640 → 768). In addition, individual layers
are composed of 3x3 convolutions instead of residual modules. Residual links are still included
across each hourglass as well as skip connections at each resolution.
3.2 Detection and Grouping
For multiperson pose estimation, we train the network to detect joints in a similar manner to prior
work on single-person pose estimation [31]. The model predicts a detection score at each pixel
location for each body joint ("left wrist", "right shoulder", etc.) regardless of person identity. The
difference from single-person pose being that an ideal heatmap for multiple people should have
multiple peaks (e.g. to identify multiple left wrists belonging to different people), as opposed to just a
single peak for a single target person.
During training, we impose a detection loss on the output heatmaps. The detection loss computes
mean square error between each predicted detection heatmap and its "ground truth" heatmap which
consists of a 2D gaussian activation at each keypoint location. This loss is the same as the one used
by Newell et al. [31].
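For reference, a minimal sketch (ours; sizes and σ are illustrative, not values from the paper) of the ground-truth heatmap construction and the MSE detection loss:

```python
import numpy as np

def joint_heatmap(h, w, cx, cy, sigma=1.0):
    """Ground-truth heatmap for one keypoint: a 2-D Gaussian centered at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def detection_loss(pred, gt):
    # mean squared error between predicted and ground-truth heatmaps
    return float(((pred - gt) ** 2).mean())
```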
Given the top activating detections from these heatmaps we need to pull together all joints that belong
to the same individual. For this, we turn to the associative embeddings. For each joint of the body,
the network produces additional channels to define an embedding vector at every pixel. Note that the
dimension of the embeddings is not critical. If a network can successfully predict high-dimensional
embeddings to separate the detections into groups, it should also be able to learn to project those
high-dimensional embeddings to lower dimensions, as long as there is enough network capacity.
In practice we have found that 1D embedding is sufficient for multiperson pose estimation, and
higher dimensions do not lead to significant improvement. Thus throughout this paper we assume 1D
embeddings.
We think of these 1D embeddings as "tags" indicating which person a detected joint belongs to.
Each detection heatmap has its own corresponding tag heatmap, so if there are m body joints to
predict then the network will output a total of 2m channels; m for detection and m for grouping. To
parse detections into individual people, we get the peak detections for each joint and retrieve their
corresponding tags at the same pixel location (illustrated in Fig. 2). We then group detections across
body parts by comparing the tag values of detections and matching up those that are close enough. A
group of detections now forms the pose estimate for a single person.
Figure 3: Tags produced by our network on a held-out validation image from the MS-COCO training
set. The tag values are already well separated and decoding the groups is straightforward.
The grouping loss assesses how well the predicted tags agree with the ground truth grouping.
Specifically, we retrieve the predicted tags for all body joints of all people at their ground truth
locations; we then compare the tags within each person and across people. Tags within a person
should be the same, while tags across people should be different.
Rather than enforce the loss across all possible pairs of keypoints, we produce a reference embedding
for each person. This is done by taking the mean of the output embeddings of all joints belonging
to a single person. Within an individual, we compute the squared distance between the reference
embedding and the predicted embedding for each joint. Then, between pairs of people, we compare
their reference embeddings to each other with a penalty that drops exponentially to zero as the
distance between the two tags increases.
Formally, let h_k ∈ R^{W×H} be the predicted tagging heatmap for the k-th body joint, where h_k(x) is a tag value at pixel location x. Given N people, let the ground truth body joint locations be T = {(x_nk)}, n = 1, ..., N, k = 1, ..., K, where x_nk is the ground truth pixel location of the k-th body joint of the n-th person.

Assuming all K joints are annotated, the reference embedding for the nth person would be

    h̄_n = (1/K) Σ_k h_k(x_nk)

The grouping loss L_g is then defined as

    L_g(h, T) = (1/(NK)) Σ_n Σ_k (h̄_n − h_k(x_nk))² + (1/N²) Σ_n Σ_{n'} exp{−(1/(2σ²)) (h̄_n − h̄_{n'})²}
The first half of the loss pulls together all of the embeddings belonging to an individual, and the second half pushes apart embeddings across people. We use a σ value of 1 in our training.
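A direct NumPy transcription of this loss might look as follows (a sketch only; the tensor layout and the assumption that all joints are annotated are ours):

    import numpy as np

    def grouping_loss(tag_maps, joint_locations, sigma=1.0):
        # tag_maps: (K, H, W), one 1D tag heatmap h_k per body joint.
        # joint_locations: (N, K, 2) integer ground-truth (y, x) positions.
        N, K, _ = joint_locations.shape
        tags = np.array([[tag_maps[k, y, x] for k, (y, x) in enumerate(person)]
                         for person in joint_locations])   # h_k(x_nk), shape (N, K)
        ref = tags.mean(axis=1)                            # reference embedding per person
        # Pull term: joints of one person toward that person's reference tag.
        pull = np.mean((tags - ref[:, None]) ** 2)
        # Push term: references of different people apart; the n == n' terms
        # only add a constant exp(0), mirroring the formula above.
        diff = ref[:, None] - ref[None, :]
        push = np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2)))
        return pull + push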
3.3 Parsing Network Output
Once the network has been trained, decoding is straightforward. We perform non-maximum suppression on the detection heatmaps and threshold to get a set of detections for each body joint. Then, for
each detection we retrieve its corresponding associative embedding tag. To give an impression of the
types of tags produced by the network and the trivial nature of grouping we refer to Figure 3; we
plot a set of detections where the y-axis indicates the class of body joint and the x-axis the assigned
embedding.
To produce a final set of predictions we iterate through each joint one by one. An ordering is
determined by first considering joints around the head and torso and gradually moving out to the
limbs. We use the detections from the first joint (the neck, for example) to form our initial pool
of detected people. Then, given the next joint, say the left shoulder, we have to figure out how to
best match its detections to the current pool of people. Each detection is defined by its score and
embedding tag, and each person is defined by the mean embedding of their current joints.
Figure 4: Qualitative results on MSCOCO validation images
We compare the distance between these embeddings, and for each person we greedily assign a new
joint based on the detection with the highest score whose embedding falls within some distance
threshold. New detections that are not matched are used to start a new person instance. This accounts
for cases where perhaps only a leg or hand is visible for a particular person. We repeat this process
for each joint of the body until every detection has been assigned to a person. No steps are taken to
ensure anatomical correctness or reasonable spatial relationships between pairs of joints.
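The greedy decoding just described can be sketched as follows (illustrative only; the threshold value and the data layout are our assumptions):

    import numpy as np

    def greedy_parse(detections_per_joint, tag_threshold=1.0):
        # detections_per_joint: list over joints, ordered head/torso outward;
        # each entry is a list of (score, tag, (y, x)) tuples after NMS.
        people = []  # each person: {'tags': [...], 'joints': {joint: (y, x)}}
        for j, detections in enumerate(detections_per_joint):
            for score, tag, loc in sorted(detections, reverse=True):
                # A person is summarized by the mean tag of its current joints.
                candidates = [p for p in people if j not in p['joints']]
                dists = [abs(tag - np.mean(p['tags'])) for p in candidates]
                if dists and min(dists) < tag_threshold:
                    match = candidates[int(np.argmin(dists))]
                else:
                    match = {'tags': [], 'joints': {}}  # start a new person instance
                    people.append(match)
                match['tags'].append(tag)
                match['joints'][j] = loc
        return [p['joints'] for p in people]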
Missing joints: In some evaluation settings we may need to ensure that each person has a prediction
for all joints, but our parsing does not guarantee this. Missing joints are usually fine, as in cases
with truncation and extreme occlusion, but when it is necessary to produce complete predictions we
introduce an additional processing step: given a missing joint, we identify all pixels whose embedding
falls close enough to the target person, and choose the pixel location with the highest activation. This
score may be lower than our usual cutoff threshold for detections.
Multiscale Evaluation: While it is feasible to train a network to predict poses for people of all
scales, there are some drawbacks. Extra capacity is required of the network to learn the necessary
scale invariance, and the precision of predictions for small people will suffer due to issues of low
resolution after pooling. To account for this, we evaluate images at test time at multiple scales. We
take the heatmaps produced at each scale and resize and average them together. Then, to combine
tags across scales, we concatenate the set of tags at a pixel location into a vector v ∈ R^m (assuming
m scales). The decoding process remains unchanged.
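A minimal sketch of this combination step (shapes are our assumptions; resizing to a common resolution is presumed already done):

    import numpy as np

    def combine_scales(heatmaps_by_scale, tags_by_scale):
        # Both arguments: lists of (K, H, W) arrays at a common resolution.
        # Detection heatmaps are averaged across scales; tags are concatenated,
        # giving a vector in R^m per pixel (m = number of scales).
        heatmaps = np.mean(heatmaps_by_scale, axis=0)   # (K, H, W)
        tags = np.stack(tags_by_scale, axis=-1)         # (K, H, W, m)
        return heatmaps, tags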
4 Experiments
Datasets We evaluate on two datasets: MS-COCO [27] and MPII Human Pose [3]. MPII Human
Pose consists of about 25k images and contains around 40k total annotated people (three-quarters of
which are available for training). Evaluation is performed on MPII Multi-Person, a set of 1758 groups
of multiple people taken from the test set as outlined in [35]. The groups for MPII Multi-Person
are usually a subset of the total people in a particular image, so some information is provided to
make sure predictions are made on the correct targets. This includes a general bounding box and scale term used to indicate the occupied region. No information is provided on the number of people or the scales of individual figures. We use the evaluation metric outlined by Pishchulin et al. [35], calculating average precision of joint detections.
Method                              Head  Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Total
Iqbal & Gall, ECCV16 [21]           58.4  53.9      44.5   35.0   42.2  36.7  31.1   43.1
Insafutdinov et al., ECCV16 [20]    78.4  72.5      60.2   51.0   57.2  52.0  45.4   59.5
Insafutdinov et al., arXiv16a [35]  89.4  84.5      70.4   59.3   68.9  62.7  54.6   70.0
Levinkov et al., CVPR17 [25]        89.8  85.2      71.8   59.6   71.1  63.0  53.5   70.6
Insafutdinov et al., CVPR17 [19]    88.8  87.0      75.9   64.9   74.2  68.8  60.5   74.3
Cao et al., CVPR17 [6]              91.2  87.6      77.7   66.8   75.4  68.9  61.7   75.6
Fang et al., ICCV17 [10]            88.4  86.5      78.6   70.4   74.4  73.0  65.8   76.7
Our method                          92.1  89.3      78.9   69.8   76.2  71.6  64.7   77.5

Table 1: Results (AP) on MPII Multi-Person.
Method        AP     AP50   AP75   APM    APL    AR     AR50   AR75   ARM    ARL
CMU-Pose [6]  0.611  0.844  0.667  0.558  0.684  0.665  0.872  0.718  0.602  0.749
G-RMI [33]    0.643  0.846  0.704  0.614  0.696  0.698  0.885  0.755  0.644  0.771
Our method    0.663  0.865  0.727  0.613  0.732  0.715  0.897  0.772  0.662  0.787

Table 2: Results on MS-COCO test-std, excluding systems trained with external data.
Method          AP     AP50   AP75   APM    APL    AR     AR50   AR75   ARM    ARL
CMU-Pose [6]    0.618  0.849  0.675  0.571  0.682  0.665  0.872  0.718  0.606  0.746
Mask-RCNN [17]  0.627  0.870  0.684  0.574  0.711  -      -      -      -      -
G-RMI [33]      0.649  0.855  0.713  0.623  0.700  0.697  0.887  0.755  0.644  0.771
Our method      0.655  0.868  0.723  0.606  0.726  0.702  0.895  0.760  0.646  0.781

Table 3: Results on MS-COCO test-dev, excluding systems trained with external data.
MS-COCO [27] consists of around 60K training images with more than 100K people with annotated
keypoints. We report performance on two test sets, a development test set (test-dev) and a standard
test set (test-std). We use the official evaluation metric that reports average precision (AP) and average
recall (AR) in a manner similar to object detection except that a score based on keypoint distance is
used instead of bounding box overlap. We refer the reader to the MS-COCO website for details [1].
Implementation Details The network used for this task consists of four stacked hourglass modules,
with an input size of 512 × 512 and an output resolution of 128 × 128. We train the network using
a batch size of 32 with a learning rate of 2e-4 (dropped to 1e-5 after about 150k iterations) using
Tensorflow [2]. The associative embedding loss is weighted by a factor of 1e-3 relative to the MSE
loss of the detection heatmaps. The loss is masked to ignore crowds with sparse annotations. At
test time an input image is run at multiple scales; the output detection heatmaps are averaged across
scales, and the tags across scales are concatenated into higher dimensional tags.
Following prior work [6], we apply a single-person pose model [31] trained on the same dataset to
investigate further refinement of predictions. We run each detected person through the single person
model, and average the output with the predictions from our multiperson pose model. From Table 5, it is clear that the benefit of this refinement is most pronounced in the single-scale setting on small
figures. This suggests output resolution is a limit of performance at a single scale. Using our method
for evaluation at multiple scales, the benefits of single person refinement are almost entirely mitigated
as illustrated in Tables 4 and 5.
MPII Results Average precision results can be seen in Table 1 demonstrating an improvement over
state-of-the-art methods in overall AP. Associative embedding proves to be an effective method for
teaching the network to group keypoint detections into individual people. It requires no assumptions
about the number of people present in the image, and also offers a mechanism for the network to
express confusion of joint assignments. For example, if the same joint of two people overlaps at the
exact same pixel location, the predicted associative embedding will be a tag somewhere between the
respective tags of each person.
We can get a better sense of the associative embedding output with visualizations of the embedding
heatmap (Figure 5). We put particular focus on the difference in the predicted embeddings when people overlap heavily, as the severe occlusion and close spacing of detected joints make it much more difficult to parse out the poses of individual people.
Figure 5: Here we visualize the associative embedding channels for different joints. The change
in embedding predictions across joints is particularly apparent in these examples where there is
significant overlap of the two target figures.
                      Head  Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Total
multi scale           92.9  90.9      81.0   71.0   79.3  70.6  63.4   78.5
multi scale + refine  93.1  90.3      81.9   72.1   80.2  72.0  67.8   79.6

Table 4: Effect of single person refinement on a held out validation set on MPII.
                       AP     AP50   AP75   APM    APL
single scale           0.566  0.818  0.618  0.498  0.670
single scale + refine  0.628  0.846  0.692  0.575  0.706
multi scale            0.650  0.867  0.713  0.597  0.725
multi scale + refine   0.655  0.868  0.723  0.606  0.726

Table 5: Effect of multi-scale evaluation and single person refinement on MS-COCO test-dev.
MS-COCO Results Tables 2 and 3 report our results on MS-COCO. We report results on both test-std and test-dev because not all recent methods report on test-std. We see that on both sets we achieve state-of-the-art performance. An illustration of the network's predictions can be seen in Figure 4.
Typical failure cases of the network stem from overlapping and occluded joints in cluttered scenes.
Table 5 reports performance of ablated versions of our full pipeline, showing the contributions
from applying our model at multiple scales and from further refinement using a single-person pose
estimator. We see that simply applying our network at multiple scales already achieves competitive performance against prior state-of-the-art methods, demonstrating the effectiveness of our end-to-end joint detection and grouping.
We perform an additional experiment on MS-COCO to gauge the relative difficulty of detection
versus grouping, that is, which part is the main bottleneck of our system. We evaluate our system on
a held-out set of 500 training images. In this evaluation, we replace the predicted detections with the
ground truth detections but still use the predicted tags. Using the ground truth detections improves AP
from 59.2 to 94.0. This shows that keypoint detection is the main bottleneck of our system, whereas
the network has learned to produce high quality grouping. This fact is also supported by qualitative
inspection of the predicted tag values, as shown in Figure 3, from which we can see that the tags are
well separated and decoding the grouping is straightforward.
5 Conclusion
In this work we introduce associative embeddings to supervise a convolutional neural network such
that it can simultaneously generate and group detections. We apply this method to multi-person pose
and demonstrate the feasibility of training to achieve state-of-the-art performance. Our method is
general enough to be applied to other vision problems as well, for example instance segmentation
and multi-object tracking in video. The associative embedding loss can be implemented given any
network that produces pixelwise predictions, so it can be easily integrated with other state-of-the-art
architectures.
6 Acknowledgements
This work is partially supported by the National Science Foundation under Grant No. 1734266. ZH
is partially supported by the Institute for Interdisciplinary Information Sciences, Tsinghua University.
References
[1] COCO: Common Objects in Context. http://mscoco.org/home/.
[2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[3] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 3686–3693. IEEE, 2014.
[4] Vasileios Belagiannis and Andrew Zisserman. Recurrent human pose estimation. In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pages 468–475. IEEE, 2017.
[5] Adrian Bulat and Georgios Tzimiropoulos. Human pose estimation via convolutional part heatmap regression. In ECCV, 2016.
[6] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, 2017.
[7] Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, and Jitendra Malik. Human pose estimation with iterative error feedback. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4733–4742, 2016.
[8] Xiao Chu, Wei Yang, Wanli Ouyang, Cheng Ma, Alan L Yuille, and Xiaogang Wang. Multi-context attention for human pose estimation. 2017.
[9] Xiaochuan Fan, Kang Zheng, Yuewei Lin, and Song Wang. Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1347–1355, 2015.
[10] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017.
[11] Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013.
[12] Andrea Frome, Yoram Singer, Fei Sha, and Jitendra Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
[13] Georgia Gkioxari, Alexander Toshev, and Navdeep Jaitly. Chained predictions using convolutional neural networks. In European Conference on Computer Vision, pages 728–743. Springer, 2016.
[14] Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In European Conference on Computer Vision, pages 529–545. Springer, 2014.
[15] Feng Han and Song-Chun Zhu. Bottom-up/top-down image parsing with attribute grammar. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1):59–73, 2009.
[16] Adam W Harley, Konstantinos G Derpanis, and Iasonas Kokkinos. Learning dense convolutional embeddings for semantic segmentation. In International Conference on Learning Representations (Workshop), 2016.
[17] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. 2017.
[18] Peiyun Hu and Deva Ramanan. Bottom-up and top-down reasoning with hierarchical rectified gaussians. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5600–5609, 2016.
[19] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Bjoern Andres, and Bernt Schiele. Articulated multi-person tracking in the wild. arXiv preprint arXiv:1612.01465, 2016.
[20] Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. DeeperCut: A deeper, stronger, and faster multi-person pose estimation model. In European Conference on Computer Vision (ECCV), May 2016.
[21] Umar Iqbal and Juergen Gall. Multi-person pose estimation with local joint-to-person associations. arXiv preprint arXiv:1608.08526, 2016.
[22] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[23] Vladlen Koltun. Efficient inference in fully connected CRFs with gaussian edge potentials. Adv. Neural Inf. Process. Syst, 2011.
[24] Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188–1196, 2014.
[25] Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, and Bjoern Andres. Joint graph decomposition & node labeling: Problem, algorithms, applications. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[26] Ita Lifshitz, Ethan Fetaya, and Shimon Ullman. Human pose estimation using deep consensus voting. In European Conference on Computer Vision, pages 246–260. Springer, 2016.
[27] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[28] Michael Maire. Simultaneous segmentation and figure/ground organization using angular embedding. In European Conference on Computer Vision, pages 450–464. Springer, 2010.
[29] Michael Maire, X Yu Stella, and Pietro Perona. Object detection and segmentation from joint embedding of parts and pixels. In 2011 International Conference on Computer Vision, pages 2142–2149. IEEE, 2011.
[30] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[31] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. ECCV, 2016.
[32] Guanghan Ning, Zhi Zhang, and Zhihai He. Knowledge-guided deep fractal neural networks for human pose estimation. arXiv preprint arXiv:1705.02407, 2017.
[33] George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, and Kevin Murphy. Towards accurate multi-person pose estimation in the wild. arXiv preprint arXiv:1701.01779, 2017.
[34] Leonid Pishchulin, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele. Poselet conditioned pictorial structures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 588–595, 2013.
[35] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele. DeepCut: Joint subset partition and labeling for multi person pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[36] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[37] X Yu Stella. Angular embedding: from jarring intensity differences to perceived luminance. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 2302–2309. IEEE, 2009.
[38] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 648–656, 2015.
[39] Jonathan J Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems, pages 1799–1807, 2014.
[40] Alexander Toshev and Christian Szegedy. DeepPose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653–1660. IEEE, 2014.
[41] Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[42] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016.
[43] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473–1480, 2005.
6,438 | 6,823 | Practical Locally Private Heavy Hitters
Raef Bassily*   Kobbi Nissim†   Uri Stemmer‡   Abhradeep Thakurta§

* Department of Computer Science & Engineering, The Ohio State University. [email protected]
† Department of Computer Science, Georgetown University. [email protected]
‡ Center for Research on Computation and Society (CRCS), Harvard University. [email protected]
§ Department of Computer Science, University of California Santa Cruz. [email protected]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Abstract

We present new practical local differentially private heavy hitters algorithms achieving optimal or near-optimal worst-case error: TreeHist and Bitstogram. In both algorithms, server running time is Õ(n) and user running time is Õ(1), hence improving on the prior state-of-the-art result of Bassily and Smith [STOC 2015] requiring Õ(n^{5/2}) server time and Õ(n^{3/2}) user time. With a typically large number of participants in local algorithms (n in the millions), this reduction in time complexity, in particular at the user side, is crucial for the use of such algorithms in practice. We implemented Algorithm TreeHist to verify our theoretical analysis and compared its performance with the performance of Google's RAPPOR code.
1 Introduction
We revisit the problem of computing heavy hitters with local differential privacy. Such computations have already been implemented to provide organizations with valuable information about their user base while providing users with the strong guarantee that their privacy would be preserved even if the organization is subpoenaed for the entire information seen during an execution. Two prominent examples are Google's use of RAPPOR in the Chrome browser [10] and Apple's use of differential privacy in iOS-10 [16]. These tools are used for learning new words typed by users and identifying frequently used emojis and frequently accessed websites.
Differential privacy in the local model. Differential privacy [9] provides a framework for rigorously analyzing privacy risk and hence can help organizations mitigate users' privacy concerns, as it ensures that what is learned about any individual user would be (almost) the same whether the user's information is used as input to an analysis or not.
Differentially private algorithms work in two main modalities: the curator model and the local
model. The curator model assumes a trusted centralized curator that collects all the personal information and then analyzes it. The local model on the other hand, does not involve a central repository.
Instead, each piece of personal information is randomized by its provider to protect privacy even if
all information provided to the analysis is revealed. Holding a central repository of personal information can become a liability to organizations in face of security breaches, employee misconduct,
subpoenas, etc. This makes the local model attractive for implementation. Indeed in the last few
years Google and Apple have deployed local differentially private analyses [10, 16].
Challenges of the local model. A disadvantage of the local model is that it requires introducing
noise at a significantly higher level than what is required in the curator model. Furthermore, some
tasks which are possible in the curator model are impossible in the local model [9, 14, 7]. To see
the effect of noise, consider estimating the number of HIV positives in a given population of n
participants. In the curated model, it suffices to add Laplace noise of magnitude O(1/ε) [9], i.e., independent of n. In contrast, a lower bound of Ω(√n/ε) is known for the local model [7]. A higher
noise level implies that the number of participants n needs to be large (maybe in the millions for a reasonable choice of ε). An important consequence is that practical local algorithms must exhibit
low time, space, and communication complexity, especially at the user side. This is the problem
addressed in our work.
Heavy hitters and histograms in the local model. Assume each of n users holds an element x_i taken from a domain of size d. A histogram of this data lists (an estimate of) the multiplicity of each domain element in the data. When d is large, a succinct representation of the histogram is desired, either in the form of a frequency oracle (allowing one to approximate the multiplicity of any domain element) or heavy hitters (listing the multiplicities of the most frequent domain elements, implicitly considering the multiplicities of other domain elements as zero). The problem of computing histograms with differential privacy has attracted significant attention both in the curator model [9, 5, 6] and the local model [13, 10, 4]. Of relevance is the work in [15].
We briefly report on the state of the art heavy hitters algorithms of Bassily and Smith [4] and Thakurta et al. [16], which are most relevant for the current work. Bassily and Smith provide matching lower and upper bounds of Θ(√(n log(d))/ε) on the worst-case error of local heavy hitters algorithms. Their local algorithm exhibits optimal communication but a rather high time complexity: server running time is Õ(n^{5/2}) and, crucially, user running time is Õ(n^{3/2}), complexity that severely hampers the practicality of this algorithm. The construction by Thakurta et al. is a heuristic with no bounds on server running time and accuracy.¹ User computation time is Õ(1), a significant improvement over [4]. See Table 1.
Our contributions. The focus of this work is on the design of locally private heavy hitters algorithms with near optimal error, keeping time, space, and communication complexity minimal.
We provide two new constructions of heavy hitters algorithms TreeHist and Bitstogram that apply
different techniques and achieve similar performance. We implemented Algorithm TreeHist and
provide measurements in comparison with RAPPOR [10] (the only currently available implementation for local histograms). Our measurements are performed with a setting that is favorable to
RAPPOR (i.e., a small input domain), yet they indicate that Algorithm TreeHist performs better
than RAPPOR in terms of noise level.
Table 1 details various performance parameters of algorithms TreeHist and Bitstogram, and the reader can check that these are similar up to small factors which we ignore in the following discussion. Comparing with [4], we improve time complexity both at the server (reduced from Õ(n^{5/2}) to Õ(n)) and at the user (reduced from Õ(n^{3/2}) to O(max(log n, log d)²)). Comparing with [16], we get provable bounds on the server running time and worst-case error. Note that Algorithm Bitstogram achieves optimal worst-case error whereas Algorithm TreeHist is almost optimal, by a factor of √(log(n)).
Performance metric       | TreeHist (this work)   | Bitstogram (this work) | Bassily and Smith [4]²
Server time              | Õ(n)                   | Õ(n)                   | Õ(n^{5/2})
User time                | Õ(1)                   | Õ(1)                   | Õ(n^{3/2})
Server processing memory | Õ(√n)                  | Õ(√n)                  | O(n²)
User memory              | Õ(1)                   | Õ(1)                   | Õ(n^{3/2})
Communication/user       | O(1)                   | Õ(1)                   | O(1)
Public randomness/user³  | O(1)                   | Õ(1)                   | Õ(n^{3/2})
Worst-case error         | O(√(n log(n) log(d)))  | O(√(n log(d)))         | O(√(n log(d)))

Table 1: Achievable performance of our protocols, and comparison to the prior state-of-the-art by Bassily and Smith [4]. For simplicity, the Õ notation hides logarithmic factors in n and d. Dependencies on the failure probability β and the privacy parameter ε are omitted.

¹ The underlying construction in [16] is of a frequency oracle.
Elements of the constructions. Main details of our constructions are presented in sections 3 and 4. Both our algorithms make use of frequency oracles: data structures that allow estimating various counts.

Algorithm TreeHist identifies heavy-hitters and estimates their frequencies by scanning the levels of a binary prefix tree whose leaves correspond to dictionary items. The recovery of the heavy hitters is in a bit-by-bit manner. As the algorithm progresses down the tree it prunes all the nodes that cannot be prefixes of heavy hitters, hence leaving Õ(√n) nodes in every depth. This is done by making queries to a frequency oracle. Once the algorithm reaches the final level of the tree it identifies the list of heavy hitters. It then invokes the frequency oracle once more on those particular items to obtain more accurate estimates for their frequencies.

Algorithm Bitstogram hashes the input domain into a domain of size roughly √n. The observation behind this algorithm is that if a heavy hitter x does not collide with other heavy hitters then (h(x), x_i) would have a significantly higher count than (h(x), ¬x_i), where x_i is the i-th bit of x. This allows recovering all bits of x in parallel given an appropriate frequency oracle.
We remark that even though we describe our protocols as operating in phases (e.g., scanning the
levels of a binary tree), these phases are done in parallel, and our constructions are non-interactive.
All users participate simultaneously, each sending a single message to the server. We also remark
that while our focus is on algorithms achieving the optimal (i.e., smallest possible) error, our algorithms are also applicable when the server is interested in a larger error, in which case the server
can choose a random subsample of the users to participate in the computation. This will reduce the
server runtime and memory usage, and also reduce the privacy cost in the sense that the unsampled
users get perfect privacy (so the server might use their data in another analysis).
2 Preliminaries

2.1 Definitions and Notation
Dictionary and users items: Let V = [d]. We consider a set of n users, where each user i ∈ [n] has some item v_i ∈ V. Sometimes, we will also use v_i to refer to the binary representation of v_i when it is clear from the context.

Frequencies: For each item v ∈ V, we define the frequency f(v) of such item as the number of users holding that item, namely, f(v) ≜ Σ_{i∈[n]} 1(v_i = v), where 1(E) for an event E is the indicator function of E.
A frequency oracle: is a data structure together with an algorithm that, for any given v ∈ V, allows computing an estimate f̂(v) of the frequency f(v).

A succinct histogram: is a data structure that provides a (short) list of items v̂_1, ..., v̂_k, called the heavy hitters, together with estimates for their frequencies (f̂(v̂_j) : j ∈ [k]). The frequencies of the items not in the list are implicitly estimated as f̂(v) = 0. We measure the error in a succinct histogram by the ℓ_∞ distance between the estimated and true frequencies, max_{v∈[d]} |f̂(v) − f(v)|. We will also consider the maximum error in the estimated frequencies restricted to the items in the list, that is, max_{j∈[k]} |f̂(v̂_j) − f(v̂_j)|.

If a succinct histogram aims to provide ℓ_∞ error η, the list does not need to contain more than O(1/η) items (since items with estimated frequencies below η may be omitted from the list, at the price of at most doubling the error).
² The user's run-time and memory in [4] can be improved to O(n) if one assumes random access to the public randomness, which we do not assume in this work.
³ Our protocols can be implemented without public randomness while attaining essentially the same performance.
2.2 Local Differential Privacy
In the local model, an algorithm A : V → Z accesses the database v = (v_1, ..., v_n) ∈ V^n only via an oracle that, given any index i ∈ [n], runs a local randomized algorithm (local randomizer) R : V → Z̃ on input v_i and returns the output R(v_i) to A.

Definition 2.1 (Local differential privacy [9, 11]). An algorithm satisfies ε-local differential privacy (LDP) if it accesses the database v = (v_1, ..., v_n) ∈ V^n only via invocations of a local randomizer R, and if for all i ∈ [n], letting R^(1), ..., R^(k) denote the algorithm's invocations of R on the data sample v_i, the algorithm A(·) ≜ (R^(1)(·), R^(2)(·), ..., R^(k)(·)) is ε-differentially private. That is, for any pair of data samples v, v′ ∈ V and any S ⊆ Range(A), Pr[A(v) ∈ S] ≤ e^ε · Pr[A(v′) ∈ S].
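As a simple concrete example, the classical randomized-response mechanism for a single bit is an ε-LDP local randomizer (a standard construction, shown here for illustration rather than taken from this paper):

    import math, random

    def randomized_response(bit, epsilon):
        # Report the true bit w.p. e^eps / (1 + e^eps), and flip it otherwise.
        # The output distributions induced by the two possible inputs differ
        # by a factor of at most e^eps, which is exactly eps-LDP.
        p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
        return bit if random.random() < p_keep else 1 - bit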
3 The TreeHist Protocol
In this section, we briefly give an overview of our construction that is based on a compressed,
noisy version of the count sketch. To maintain clarity of the main ideas, we give here a high-level
description of our construction. We refer to the full version of this work [3] for a detailed description
of the full construction.
We first introduce some objects and public parameters that will be used in the construction:

Prefixes: For a binary string v, we will use v[1 : ℓ] to denote the ℓ-bit prefix of v. Let V̂ = {v ∈ {0,1}^ℓ for some ℓ ∈ [log d]}. Note that the elements of V̂ can be arranged in a binary prefix tree of depth log d, where the nodes at level ℓ of the tree represent all binary strings of length ℓ. The items of the dictionary V represent the bottommost level of that tree.

Hashes: Let t, m be positive integers to be specified later. We will consider a set of t pairs of hash functions {(h_1, g_1), ..., (h_t, g_t)}, where for each i ∈ [t], h_i : V → [m] and g_i : V → {−1, +1} are independently and uniformly chosen pairwise independent hash functions.

Basis matrix: Let W ∈ {−1, +1}^{m×m} be √m · H_m, where H_m is the Hadamard transform matrix of size m. It is important to note that we do not need to store this matrix. The value of any entry in this matrix can be computed in O(log m) bit operations given the (row, column) index of that entry.

Global parameters: The total number of users n, the size of the Hadamard matrix m, the number of hash pairs t, the privacy parameter ε, the confidence parameter β, and the hash functions (h_1, g_1), ..., (h_t, g_t) are assumed to be public information. We set t = O(log(n/β)) and m = O(√(n / log(n/β))).

Public randomness: In addition to the t hash pairs {(h_1, g_1), ..., (h_t, g_t)}, we assume that the server creates a random partition Π : [n] → [log d] × [t] that assigns to each user i ∈ [n] a random pair (ℓ_i, j_i) ∈ [log(d)] × [t], and another random function Q : [n] → [m] that assigns⁴ to each user i a uniformly random index r_i ∈ [m]. We assume that such random indices ℓ_i, j_i, r_i are shared between the server and each user.
First, we describe the two main modules of our protocol.

3.1 A local randomizer: LocalRnd
For each i ∈ [n], user i runs her own independent copy of a local randomizer, denoted as LocalRnd, to generate her private report. LocalRnd of user i starts by acquiring the index triple (ℓ_i, j_i, r_i) ∈ [log d] × [t] × [m] from public randomness. For each user, LocalRnd is invoked twice in the full protocol: once during the first phase of the protocol (called the pruning phase) where the high-frequency items (heavy hitters) are identified, and a second time during the final phase (the estimation phase) to enable the protocol to get better estimates for the frequencies of the heavy hitters.

⁴ We could have grouped Π and Q into one random function mapping [n] to [log d] × [t] × [m], however, we prefer to split them for clarity of exposition as each source of randomness will be used for a different role.
In the first invocation, LocalRnd of user i performs its computation on the ℓ_i-bit prefix of the item v_i of user i, while in the second invocation, it performs the computation on the entire user's string v_i. Apart from this, in both invocations, LocalRnd follows similar steps. It first selects the hash pair (h_{j_i}, g_{j_i}), computes c_i = h_{j_i}(v_i[1 : ℓ̂]) (where ℓ̂ = ℓ_i in the first invocation and ℓ̂ = log d in the second invocation, and v_i[1 : ℓ̂] is the ℓ̂-bit prefix of v_i), then it computes a bit x_i = g_{j_i}(v_i[1 : ℓ̂]) · W_{r_i, c_i} (where W_{r,c} denotes the (r, c) entry of the basis matrix W). Finally, to guarantee ε-local differential privacy, it generates a randomized response y_i based on x_i (i.e., y_i = x_i with probability e^{ε/2}/(1 + e^{ε/2}) and y_i = −x_i with probability 1/(1 + e^{ε/2})), which is sent to the server.
Our local randomizer can be thought of as a transformed, compressed (via sampling), and randomized version of the count sketch [8]. In particular, we can think of LocalRnd as follows. It starts off with similar steps to the standard count sketch algorithm, but then deviates from it as it applies the Hadamard transform to the user's signal, then samples one bit from the result. By doing so, we can achieve significant savings in space and communication without sacrificing accuracy.
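The following sketch captures the main steps of LocalRnd. It is our own, not the authors' code: the hash functions are simulated with salted SHA-256, m is assumed to be a power of two so Hadamard entries can be computed bitwise, and the √m scaling of W is left to the server side.

    import hashlib, math, random

    def hadamard_entry(r, c):
        # Entry (r, c) of the m x m Hadamard matrix: (-1)^popcount(r AND c).
        return -1 if bin(r & c).count('1') % 2 else 1

    def h_hash(j, prefix, m):
        # Stand-in for the pairwise-independent hash h_j (simulation choice).
        digest = hashlib.sha256(f"h{j}|{prefix}".encode()).digest()
        return int.from_bytes(digest[:8], 'big') % m

    def g_hash(j, prefix):
        digest = hashlib.sha256(f"g{j}|{prefix}".encode()).digest()
        return 1 if digest[0] % 2 == 0 else -1

    def local_rnd(v_bits, ell, j, r, m, eps):
        # One invocation on the ell-bit prefix of the user's item v_bits.
        prefix = v_bits[:ell]
        c = h_hash(j, prefix, m)
        x = g_hash(j, prefix) * hadamard_entry(r, c)
        p_keep = math.exp(eps / 2) / (1.0 + math.exp(eps / 2))
        return x if random.random() < p_keep else -x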
3.2 A frequency oracle: FreqOracle
Suppose we want to allow the server to estimate the frequencies of some given subset V̂ ⊆ {0,1}^ℓ for some given ℓ ∈ [log d] based on the noisy users' reports. We give a protocol, denoted as FreqOracle, for accomplishing this task.

For each queried item v̂ ∈ V̂ and for each hash index j ∈ [t], FreqOracle computes c = h_j(v̂), then collects the noisy reports of a collection of users I_{ℓ,j} that contains every user i whose pair of prefix and hash indices (ℓ_i, j_i) match (ℓ, j). Next, it estimates the inverse Hadamard transform of the compressed and noisy signal of each user in I_{ℓ,j}. In particular, for each i ∈ I_{ℓ,j}, it computes y_i · W_{r_i, c}, which can be described as a multiplication between y_i e_{r_i} (where e_{r_i} is the indicator vector with 1 at the r_i-th position) and the scaled Hadamard matrix W, followed by selecting the c-th entry of the resulting vector. This brings us back to the standard count sketch representation. It then sums all the results and multiplies the outcome by g_j(v̂) to obtain an estimate f̂_j(v̂) for the frequency of v̂. As in the count sketch algorithm, this is done for every j ∈ [t], then FreqOracle obtains a high-confidence estimate by computing the median of all the t frequency estimates.
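Reusing the helpers from the sketch above, the server-side estimate for one queried prefix could look like this (the unbiasing and normalization constants from the full version are deliberately omitted):

    import numpy as np

    def freq_oracle(v_hat, reports, t, m):
        # reports: list of (j_i, r_i, y_i) for the users whose level index
        # matches the level of the queried prefix v_hat.
        estimates = []
        for j in range(t):
            c = h_hash(j, v_hat, m)
            # y_i * W[r_i, c] undoes the user's sampled Hadamard projection.
            s = sum(y * hadamard_entry(r, c) for (ji, r, y) in reports if ji == j)
            estimates.append(g_hash(j, v_hat) * s)
        # Median over the t hash pairs gives a high-confidence estimate.
        return float(np.median(estimates))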
3.3 The protocol: TreeHist
The protocol is easier to describe via operations over nodes of the prefix tree V̂ of depth log d (described earlier). The protocol runs through two main phases: the pruning (or, scanning) phase, and the final estimation phase.

In the pruning phase, the protocol scans the levels of the prefix tree starting from the top level (that contains just 0 and 1) to the bottom level (that contains all items of the dictionary). For a given node at level ℓ ∈ [log d], using FreqOracle as a subroutine, the protocol gets an estimate for the frequency of the corresponding ℓ-bit prefix. For any ℓ ∈ [log(d) − 1], before the protocol moves to level ℓ + 1 of the tree, it prunes all the nodes in level ℓ that cannot be prefixes of actual heavy hitters (high-frequency items in the dictionary). Then, as it moves to level ℓ + 1, the protocol considers only the children of the surviving nodes in level ℓ. The construction guarantees that, with high probability, the number of surviving nodes in each level cannot exceed O(√(n / (log(d) log(n)))). Hence, the total number of nodes queried by the protocol (i.e., submitted to FreqOracle) is at most O(√(n log(d) / log(n))).
In the second and final phase, after reaching the final level of the tree, the protocol would have
already identified a list of the candidate heavy hitters, however, their estimated frequencies may not
be as accurate as we desire due to the large variance caused by the random partitioning of users
across all the levels of the tree. Hence, it invokes the frequency oracle once more on those particular
items, and this time, the sampling variance is reduced as the set of users is partitioned only across
the t hash pairs (rather than across log(d) × t bins as in the pruning phase). By doing this, the server
obtains more accurate estimates for the frequencies of the identified heavy hitters. The privacy and
accuracy guarantees are stated below. The full details are given in the full version [3].
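In pseudocode form, the pruning phase can be summarized as below (a high-level sketch of ours; the threshold and the per-level cap only loosely follow the analysis):

    def tree_hist_prune(freq_est, log_d, threshold, max_nodes):
        # freq_est(prefix) -> estimated frequency of that prefix via FreqOracle.
        survivors = ['0', '1']                  # level 1 of the prefix tree
        for level in range(2, log_d + 1):
            children = [p + b for p in survivors for b in '01']
            scored = sorted(((freq_est(p), p) for p in children), reverse=True)
            # Keep high-scoring children, capped per level as in the analysis.
            survivors = [p for (f, p) in scored[:max_nodes] if f >= threshold]
        return survivors                        # candidate heavy hitters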
3.4 Privacy and Utility Guarantees
Theorem 3.1. Protocol TreeHist is ε-local differentially private.

Theorem 3.2. There is a number η = O(√(n log(n/β) log(d)) / ε) such that with probability at least 1 − β, the output list of the TreeHist protocol satisfies the following properties:

1. it contains all items v ∈ V whose true frequencies are above 3η.
2. it does not contain any item v ∈ V whose true frequency is below η.
3. every frequency estimate in the output list is accurate up to an error ± O(√(n log(n/β)) / ε).
4 Locally Private Heavy-Hitters: Bit by Bit
We now present a simplified description of our second protocol, that captures most of the ideas. We
refer the reader to the full version of this work for the complete details.
First Step: Frequency Oracle. Recall that a frequency oracle is a protocol that, after communicating with the users, outputs a data structure capable of approximating the frequency of every domain element v ∈ V. So, if we were to allow the server to have linear runtime in the domain size |V| = d, then a frequency oracle would suffice for computing histograms. As we are interested in protocols with a significantly lower runtime, we will only use a frequency oracle as a subroutine, and query it only for (roughly) √n elements.
Let Z ∈ {±1}^{d×n} be a matrix chosen uniformly at random, and assume that Z is publicly known.⁵ That is, for every domain element v ∈ V and every user j ∈ [n], we have a random bit Z[v, j] ∈ {±1}. As Z is publicly known, every user j can identify its corresponding bit Z[v_j, j], where v_j ∈ V is the input of user j. Now consider a protocol in which users send randomized responses of their corresponding bits. That is, user j sends y_j = Z[v_j, j] w.p. 1/2 + ε/2 and sends y_j = −Z[v_j, j] w.p. 1/2 − ε/2. We can now estimate the frequency of every domain element v ∈ V as

    a(v) = (1/ε) Σ_{j∈[n]} y_j · Z[v, j].

To see that a(v) is accurate, observe that a(v) is the sum of n independent random variables (one for every user). For the users j holding the input v being estimated (that is, v_j = v) we will have that (1/ε) E[y_j · Z[v, j]] = 1. For the other users we will have that y_j and Z[v, j] are independent, and hence E[y_j · Z[v, j]] = E[y_j] · E[Z[v, j]] = 0. That is, a(v) can be expressed as the sum of n independent random variables: f(v) variables with expectation 1, and (n − f(v)) variables with expectation 0. The fact that a(v) is an accurate estimation for f(v) now follows from the Hoeffding bound.
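The following toy simulation (pure NumPy; all parameter values are arbitrary choices of ours) illustrates the estimator a(v):

    import numpy as np

    def simulate_frequency_oracle(items, d, eps, seed=0):
        # items: list of user inputs in range(d). Returns a(v) for all v.
        rng = np.random.default_rng(seed)
        n = len(items)
        Z = rng.choice([-1, 1], size=(d, n))       # public random matrix
        # Each user reports a randomized response of the bit Z[v_j, j].
        keep = rng.random(n) < 0.5 + eps / 2.0
        truth = Z[np.asarray(items), np.arange(n)]
        y = np.where(keep, truth, -truth)
        return (Z @ y) / eps                       # a(v), unbiased for f(v)

    # Example: item 3 is a heavy hitter among n = 10000 users.
    a = simulate_frequency_oracle([3] * 4000 + [7] * 100 + [1] * 5900, d=10, eps=0.5)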
Lemma 4.1 (Algorithm Hashtogram). Let ε ≤ 1. Algorithm Hashtogram satisfies ε-LDP. Furthermore, with probability at least 1 − β, algorithm Hashtogram answers every query v ∈ V with a(v) satisfying: |a(v) − f(v)| ≤ O((1/ε) √(n log(nd/β))).
Second Step: Identifying Heavy-Hitters. Let us assume that we have a frequency oracle protocol with worst-case error τ. We now want to use our frequency oracle in order to construct a protocol that operates in two steps: First, it identifies a small set of potential "heavy-hitters", i.e., domain elements that appear in the database at least 2τ times. Afterwards, it uses the frequency oracle to estimate the frequencies of those potential heavy elements.⁶

Let h : V → [T] be a (publicly known) random hash function, mapping domain elements into [T], where T will be set later.⁷ We will now use h in order to identify the heavy-hitters. To that end,

⁵ As we describe in the full version of this work, Z has a short description, as it need not be uniform.
⁶ Even though we describe the protocol as having two steps, the necessary communication for these steps can be done in parallel, and hence, our protocol will have only 1 round of communication.
⁷ As with the matrix Z, the hash function h can have a short description length.
[Figure 1: Frequency vs privacy (ε) on the NLTK-Brown corpus. Plot "Estimated frequency versus epsilon": estimated frequency for items of rank 1, 10, and 100, true vs. private estimates, for ε ∈ {0.1, 1.0, 2.0, 5.0, 10.0}.]

[Figure 2: Frequency vs privacy (ε) on the Demo 3 experiment from RAPPOR. Plot "Comparison between Count-Sketch and RAPPOR": frequency estimates of True Freq, Count_Sketch, and RAPPOR.]
let v ? ? V denote such a heavy-hitter, appearing at least 2? times in the database S, and denote
t? = h(v ? ). Assuming that T is big enough, w.h.p. we will have that v ? is the only input element
(from S) that is mapped (by h) into the hash value t? . Assuming that this is indeed the case, we will
now identify v ? bit by bit.
For ` ? [log d], denote S` = (h(vj ), vj [`])j?[n] , where vj [`] is bit ` of vj . That is, S` is a database
over the domain ([T ]?{0, 1}), where the row corresponding to user j is (h(vj ), vj [`]). Observe that
every user can compute her own row locally. As v ? is a heavy-hitter, for every ` ? [log d] we have
that (t? , v ? [`]) appears in S` at least 2? times. On the other hand, as we assumed that v ? is the only
input element that is mapped into t? we get that (t? , 1 ? v ? [`]) does not appear in S` at all. Recall
that our frequency oracle has error at most ? , and hence, we can use it to accurately determine the
bits of v ? .
To make things more concrete, consider the protocol that for every hash value t ∈ [T], for every
coordinate ℓ ∈ [log d], and for every bit b ∈ {0, 1}, obtains an estimation (using the frequency
oracle) for the multiplicity of (t, b) in S_ℓ (so there are log d invocations of the frequency oracle, and
a total of 2T log d estimations). Now, for every t ∈ [T] let us define v̂(t), where bit ℓ of v̂(t) is the bit b
s.t. (t, b) is more frequent than (t, 1 − b) in S_ℓ. By the above discussion, we will have that v̂(t*) = v*.
That is, the protocol identifies a set of T domain elements, containing all of the heavy-hitters. The
frequency of the identified heavy-hitters can then be estimated using the frequency oracle.
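As a concrete illustration of this decoding step, the following sketch (ours, not from the paper) recovers the candidate string v̂(t) for every hash value t from the databases S_ℓ; for brevity it uses exact counts where the actual protocol would invoke the private frequency oracle.

    from collections import Counter

    def decode_candidates(S, T, log_d):
        # S[l] is the database S_l: a list of pairs (h(v_j), bit l of v_j).
        # Exact counts stand in for the private frequency oracle here.
        counts = [Counter(S_l) for S_l in S]
        candidates = []
        for t in range(T):
            # Bit l of the candidate for bucket t is the more frequent of
            # (t, 0) and (t, 1) in S_l.
            bits = [1 if counts[l][(t, 1)] > counts[l][(t, 0)] else 0
                    for l in range(log_d)]
            candidates.append(bits)
        return candidates  # contains v* for every heavy-hitter bucket t*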
Remark 4.1. As should be clear from the above discussion, it suffices to take T ≳ n², as this will
ensure that there are no collisions among different input elements. As we only care about collisions
between "heavy-hitters" (appearing in S at least √n times), it would suffice to take T ≳ n to ensure
that w.h.p. there are no collisions between heavy-hitters. In fact, we could even take T ≳ √n, which
would ensure that a heavy-hitter x* has no collisions with constant probability, and then to amplify
our confidence using repetitions.
Lemma 4.2 (Algorithm Bitstogram). Let ε ≤ 1. Algorithm Bitstogram satisfies ε-LDP.
Furthermore, the algorithm returns a list L of length Õ(√n) satisfying:

1. With probability 1 − β, for every (v, a) ∈ L we have that |a − f(v)| ≤ O( (1/ε) · √(n log(n/β)) ).

2. W.p. 1 − β, for every v ∈ V s.t. f(v) ≥ O( (1/ε) · √(n log(d/β) log(1/β)) ), we have that v is in L.
5 Empirical Evaluation

We now discuss implementation details of our algorithms mentioned in Section 3.[8] The main objective of this section is to emphasize the empirical efficacy of our algorithms. [16] recently claimed
space optimality for a similar problem, but a formal analysis (or empirical evidence) was not provided.
5.1 Evaluation of the Private Frequency Oracle
The objective of this experiment is to test the efficacy of our algorithm in estimating the frequencies
of a known dictionary of user items, under local differential privacy. We estimate the error in
estimation while varying the privacy parameter ε. (See Section 2.1 for a refresher on the notation.)
We ran the experiment (Figure 1) on a data set drawn uniformly at random from the NLTK Brown
corpus [1]. The data set we created has n = 10 million samples drawn i.i.d. from the corpus with
replacement (which corresponds to 25,991 unique words), and the system parameters are chosen
as follows: number of data samples (n): 10 million, range of the hash function (m): √n, number
of hash functions (t): 285. For the hash functions, we used the prefix bits of SHA-256. The estimated frequency is scaled by the number of samples to normalize the result, and each experiment
is averaged over ten runs. In this plot, the rank corresponds to the rank of a domain element in the
distribution of true frequencies in the data set. Observations: i) The plots corroborate the fact that
the frequency oracle is indeed unbiased. The average frequency estimate (over ten runs) for each
percentile is within one standard deviation of the corresponding true estimate. ii) The error in the
estimates goes down significantly as the privacy parameter ε is increased.
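For illustration, a hash family of this kind can be derived from SHA-256 prefixes roughly as follows (our own sketch; salting by the function index and the exact number of prefix bits are assumptions, not details taken from the implementation).

    import hashlib

    def make_hash(index, m):
        # One of t hash functions mapping strings into [m], built from the
        # prefix bits of SHA-256; the per-function salt is hypothetical.
        def h(word):
            digest = hashlib.sha256(f"{index}:{word}".encode()).digest()
            return int.from_bytes(digest[:8], "big") % m   # 64 prefix bits
        return h

    hashes = [make_hash(i, m=3163) for i in range(285)]   # t = 285, m ~ sqrt(n)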
Comparison with RAPPOR [10]. Here we compare our implementation with the only publicly
available code for locally private frequency estimation. We took the snapshot of the RAPPOR
code base (https://github.com/google/rappor) on May 9th, 2017. To perform a fair
comparison, we tested our algorithm against one of the demo experiments available for RAPPOR
(Demo 3, using the demo.sh script) with the same privacy parameter ε = ln(3), the number of data
samples n = 1 million, and the data set to be the same data set generated by the demo.sh script. In
Figure 2 we observe that for higher frequencies both RAPPOR and our algorithm perform similarly,
with ours being slightly better. However, in lower frequency regimes, the RAPPOR estimates are
zero most of the time, while our estimates are closer to the true estimates. We do not claim our
algorithm to be universally better than RAPPOR on all data sets. Rather, through our experiments
we want to motivate the need for a more thorough empirical comparison of both algorithms.
5.2 Private Heavy-hitters
In this section, we take on the harder task of identifying the heavy hitters, rather than estimating the
frequencies of domain elements. We run our experiments on the NLTK data set described earlier,
with the same default system parameters (as Section 5.1) along with n = 10 million and ε = 2, except
now we assume that we do not know the domain. As a part of our algorithm design, we assume that
every element in the domain is from the English alphabet [a-z] and is of length exactly equal
to six letters. Words longer than six letters were truncated, and words shorter than six letters were
padded with a special symbol at the end. We set a threshold of 15·√n for being a heavy hitter. As
with most natural language data sets, the NLTK Brown data follows a power law distribution with
a very long tail. (See the full version of this work for a visualization of the distribution.)
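The normalization of words to a fixed length can be summarized by a short routine of the following form (our sketch; the concrete padding symbol is a placeholder).

    def normalize(word, length=6, pad="*"):
        # Truncate words longer than six letters; pad shorter ones at the end.
        word = word.lower()[:length]
        return word + pad * (length - len(word))

    assert normalize("differential") == "differ"
    assert normalize("dog") == "dog***"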
In Table 2 we state our corresponding precision and recall parameters, and the false positive rate.
The total number of positive examples is 22 (out of 25,991 unique words), and the total number
of negative examples is roughly 3 × 10⁸. The total number of false positives is FP = 60, and false
negatives FN = 3. This corresponds to a vanishing FP-rate, considering the total number of negative
examples roughly equals 3 × 10⁸. In practice, if there are false positives, they can be easily pruned
using domain expertise. For example, if we are trying to identify new words which users are typing
in English [2], then using the domain expertise of English, a set of false positives can be easily ruled
out by inspecting the list of heavy hitters output by the algorithm. On the other hand, this cannot
be done for false negatives. Hence, it is important to have a high recall value. The fact that we
have three false negatives is because the frequencies of those words are very close to the threshold of
15√n. While there are other algorithms for finding heavy-hitters [4, 13], either they do not provide
any theoretical guarantee for the utility [10, 12, 16], or there does not exist a scalable and efficient
implementation for them.
[8] The experiments are performed without the Hadamard compression during data transmission.
Data set          | unique words | Precision       | Recall (TPR)    | FPR
NLTK Brown corpus | 25991        | 0.24 (σ = 0.04) | 0.86 (σ = 0.05) | 2 × 10⁻⁷

Table 2: Private Heavy-hitters with threshold = 15√n. Here σ corresponds to the standard deviation.
TPR and FPR correspond to true positive rate and false positive rate respectively.
References
[1] NLTK Brown corpus. www.nltk.org.
[2] Apple tries to peek at user habits without violating privacy. The Wall Street Journal, 2016.
[3] Raef Bassily, Kobbi Nissim, Uri Stemmer, and Abhradeep Thakurta. Practical locally private heavy hitters. CoRR, abs/1707.04982, 2017.
[4] Raef Bassily and Adam Smith. Local, private, efficient protocols for succinct histograms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 127–135. ACM, 2015.
[5] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. Theory of Computing, 12(1):1–61, 2016.
[6] Mark Bun, Kobbi Nissim, Uri Stemmer, and Salil P. Vadhan. Differentially private release and learning of threshold functions. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 634–649. IEEE Computer Society, 2015.
[7] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Optimal lower bound for differentially private multi-party aggregation. In Leah Epstein and Paolo Ferragina, editors, Algorithms - ESA 2012 - 20th Annual European Symposium, Ljubljana, Slovenia, September 10-12, 2012. Proceedings, volume 7501 of Lecture Notes in Computer Science, pages 277–288. Springer, 2012.
[8] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In ICALP, 2002.
[9] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[10] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In CCS, 2014.
[11] Alexandre Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant. Limiting privacy breaches in privacy preserving data mining. In PODS, pages 211–222. ACM, 2003.
[12] Giulia Fanti, Vasyl Pihur, and Úlfar Erlingsson. Building a RAPPOR with the unknown: Privacy-preserving learning of associations and data dictionaries. arXiv preprint arXiv:1503.01214, 2015.
[13] Justin Hsu, Sanjeev Khanna, and Aaron Roth. Distributed private heavy hitters. In International Colloquium on Automata, Languages, and Programming, pages 461–472. Springer, 2012.
[14] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011.
[15] Nina Mishra and Mark Sandler. Privacy via pseudorandom sketches. In Proceedings of the Twenty-Fifth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 143–152. ACM, 2006.
[16] A. G. Thakurta, A. H. Vyrros, U. S. Vaishampayan, G. Kapoor, J. Freudiger, V. R. Sridhar, and D. Davidson. Learning new words. US Patent 9594741, 2017.
6,439 | 6,824 | Large-Scale Quadratically Constrained Quadratic Program via Low-Discrepancy Sequences
Kinjal Basu, Ankan Saha, Shaunak Chatterjee
LinkedIn Corporation
Mountain View, CA 94043
{kbasu, asaha, shchatte}@linkedin.com
Abstract
We consider the problem of solving a large-scale Quadratically Constrained
Quadratic Program. Such problems occur naturally in many scientific and web
applications. Although there are efficient methods which tackle this problem, they
are mostly not scalable. In this paper, we develop a method that transforms the
quadratic constraint into a linear form by sampling a set of low-discrepancy points
[16]. The transformed problem can then be solved by applying any state-of-the-art
large-scale quadratic programming solver. We show the convergence of our approximate solution to the true solution as well as some finite sample error bounds.
Experimental results are also shown to prove scalability as well as improved quality
of approximation in practice.
1 Introduction
In this paper we consider the class of problems called quadratically constrained quadratic programming (QCQP), which take the following form:

    minimize_x    (1/2) xᵀP₀x + q₀ᵀx + r₀
    subject to    (1/2) xᵀP_i x + q_iᵀx + r_i ≤ 0,    i = 1, …, m        (1)
                  Ax = b,

where P₀, …, P_m are n × n matrices. If each of these matrices is positive definite, then the
where P0 , . . . , Pm are n ? n matrices. If each of these matrices are positive definite, then the
optimization problem is convex. In general, however, solving QCQP is NP-hard, which can be
verified by easily reducing a 0 ? 1 integer programming problem (known to be NP-hard) to a QCQP
[4]. In spite of that challenge, they form an important class of optimization problems, since they
arise naturally in many engineering, scientific and web applications. Two famous examples of QCQP
include the max-cut and boolean optimization [11]. Other examples include alignment of kernels
in semi-supervised learning [29], learning the kernel matrix in discriminant analysis [28] as well as
more general learning of kernel matrices [21], steering direction estimation for radar detection [15],
several applications in signal processing [20], the triangulation in computer vision [3] among others.
Internet applications handling large amounts of data often model trade-offs between key utilities using
constrained optimization formulations [1, 2]. When there is independence among the expected utilities
(e.g., click, time spent, revenue obtained) of items, the objective or the constraints corresponding to
those utilities are linear. However, in most real life scenarios, there is dependence among expected
utilities of items presented together on a web page or mobile app. Examples of such dependence are
abundant in newsfeeds, search result pages and most lists of recommendations on the internet. If
this dependence is expressed through a linear model, it makes the corresponding objective and/or
constraint quadratic. This makes the constrained optimization problem a very large scale QCQP, if
the dependence matrix (often enumerated by a very large number of members or updates) is positive
definite with co-dependent utilities [6].
Although there are a plethora of such applications, solving this problem on a large scale is still
extremely challenging. There are two main relaxation techniques that are used to solve a QCQP,
namely, semi-definite programming (SDP) and reformulation-linearization technique (RLT) [11].
However, both of them introduce a new variable X = xxᵀ so that the problem becomes linear in X.
Then they relax the condition X = xxᵀ by different means. Doing so unfortunately increases the
number of variables from n to O(n²). This makes these methods prohibitively expensive for most
large scale applications. There is literature comparing these methods which also provides certain
combinations and generalizations [4, 5, 22]. However, they all suffer from the same curse of dealing
with O(n²) variables. Even when the problem is convex, there are techniques such as second order
cone programming [23], which can be efficient, but scalability still remains an important issue with
prior QCQP solvers.
The focus of this paper is to introduce a novel approximate solution to the convex QCQP problem
which can tackle such large-scale situations. We devise an algorithm which approximates the
quadratic constraints by a set of linear constraints, thus converting the problem into a quadratic
program (QP) [11]. In doing so, we remain with a problem having n variables instead of O(n²). We
then apply efficient QP solvers such as Operator Splitting or ADMM [10, 26], which are well adapted
for distributed computing, to get the final solution for problems of much larger scale. We theoretically
prove the convergence of our technique to the true solution in the limit. We also provide experiments
comparing our algorithm to existing state-of-the-art QCQP solvers to show comparative solutions for
smaller data size as well as significant scalability in practice, particularly in the large data regime
where existing methods fail to converge. To the best of our knowledge, this technique is new and has
not been previously explored in the optimization literature.
Notation: Throughout the paper, bold lowercase letters refer to vectors while bold uppercase letters
refer to matrices.
The rest of the paper is structured as follows. In Section 2, we describe the approximate problem,
important concepts to understand the sampling scheme as well as the approximation algorithm
to convert the problem into a QP. Section 3 contains the proof of convergence, followed by the
experimental results in Section 4. Finally, we conclude with some discussion in Section 5.
2 QCQP to QP Approximation

For sake of simplicity throughout the paper, we deal with a QCQP having a single quadratic constraint.
The procedure detailed in this paper can be easily generalized to multiple constraints. Thus, for the
rest of the paper, without loss of generality we consider the problem of the form

    minimize_x    (x − a)ᵀA(x − a)
    subject to    (x − b)ᵀB(x − b) ≤ b̂,                                  (2)
                  Cx = c.
This is a special case of the general formulation in (1). For this paper, we restrict ourselves to A,
B ∈ ℝ^{n×n} being positive definite matrices, so that the objective function is strongly convex.

In this section, we describe the linearization technique to convert the quadratic constraint into a set of
N linear constraints. The main idea behind this approximation is the fact that given any convex set
in the Euclidean plane, there exists a convex polytope that covers the set. Let us begin by introducing
a few notations. Let P denote the optimization problem (2). Define

    S := {x ∈ ℝⁿ : (x − b)ᵀB(x − b) ≤ b̂}.                               (3)

Let ∂S denote the boundary of the ellipsoid S. To generate the N linear constraints for this one
quadratic constraint, we generate a set of N points, X_N = {x_1, …, x_N}, such that each x_j ∈ ∂S for
j = 1, …, N. The sampling technique to select the point set is given in Section 2.1. Corresponding
to these N points we get the following set of N linear constraints:

    (x − b)ᵀB(x_j − b) ≤ b̂    for j = 1, …, N.                           (4)
Looking at it geometrically, it is not hard to see that each of these linear constraints is just a tangent
plane to S at x_j for j = 1, …, N. Figure 1 shows a set of six linear constraints for an ellipsoidal
feasible set in two dimensions. Thus, using these N linear constraints we can write the approximate
optimization problem P(X_N) as follows:

    minimize_x    (x − a)ᵀA(x − a)
    subject to    (x − b)ᵀB(x_j − b) ≤ b̂    for j = 1, …, N              (5)
                  Cx = c.

Now instead of solving P, we solve P(X_N) for a large enough value of N. Note that as we sample
more points (N → ∞), our approximation keeps getting better.
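In matrix form, the N constraints of (5) are straightforward to assemble from the sampled points; the following sketch (ours, assuming a symmetric positive definite B) returns them in the standard form Gx ≤ h expected by off-the-shelf QP solvers.

    import numpy as np

    def tangent_constraints(B, b, b_hat, X):
        # Rows of X are the sampled points x_j on the boundary of S. Row j of
        # G is (x_j - b)^T B, so G x <= h encodes (x - b)^T B (x_j - b) <= b_hat.
        G = (X - b) @ B
        h = b_hat + G @ b   # constant term moved to the right-hand side
        return G, h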
[Figure 1: Converting a quadratic constraint into linear constraints. The tangent planes through the 6 points x_1, …, x_6 on the ellipsoid S create the polytope approximation T.]
2.1 Sampling Scheme
The accuracy of the solution of P(X_N) solely depends on the choice of X_N. The tangent planes to
S at those N points create a cover of S. We use the notion of a bounded cover, which we define as
follows.

Definition 1. Let T be the convex polytope generated by the tangent planes to S at the points
x_1, …, x_N ∈ ∂S. T is said to be a bounded cover of S if

    d(T, S) := sup_{t∈T} d(t, S) < ∞,

where d(t, S) := inf_{x∈S} ‖t − x‖ and ‖·‖ denotes the Euclidean distance.
The first result shows that there exists a bounded cover with only n + 1 points.

Lemma 1. Let S be an n-dimensional ellipsoid as defined in (3). Then there exists a bounded cover
with n + 1 points.
Proof. Note that since S is a compact convex body in ℝⁿ, there exists a location-translated version of
an n-dimensional simplex T = {x ∈ ℝⁿ₊ : Σ_{i=1}^n x_i = K} such that S is contained in the interior of
T. We can always shrink T such that each face touches S tangentially. Since there are n + 1 faces,
we will get n + 1 points whose tangent surfaces create a bounded cover.
Although Lemma 1 gives a simple constructive proof of a bounded cover, it is not what we are truly
interested in. What we want is to construct a bounded cover T which is as close as possible to S, thus
leading to a better approximation. However, note that choosing the points via naive sampling can
lead to arbitrarily bad enlargements of the feasible set, and in the worst case might even create a cover
which is not bounded. Hence we need an optimal set of points which creates an optimal bounded
cover. Formally,
Definition 2. T* = T(x*_1, …, x*_N) is said to be an optimal bounded cover if

    sup_{t∈T*} d(t, S) ≤ sup_{t∈T} d(t, S)

for any bounded cover T generated by any other N-point set. Moreover, {x*_1, …, x*_N} is defined
to be the optimal N-point set.
Note that we can think of the optimal N-point set as that set of N points which minimizes the
maximum distance between T and S, i.e.,

    T* = argmin_T d(T, S).
It is not hard to see that the optimal N-point set on the unit circle in two dimensions is the set of N-th
roots of unity, unique up to rotation. This point set also has a very good property: it has been shown
that the N-th roots of unity minimize the discrete Riesz energy for the unit circle [14, 17]. The
concept of Riesz energy also exists in higher dimensions. Thus, generalizing this result, we choose
our optimal N-point set on ∂S so as to minimize the Riesz energy. We briefly describe it below.
2.1.1 Riesz Energy

The Riesz energy of a point set A_N = {x_1, …, x_N} is defined as E_s(A_N) := Σ_{i≠j} ‖x_i − x_j‖^{−s} for
a positive real parameter s. There is a vast literature on Riesz energy and its association with "good"
configurations of points. It is well known that the measures associated to the optimal point sets that
minimize the Riesz energy on ∂S converge to the normalized surface measure of ∂S [17]. Thus,
using this fact, we can associate the optimal N-point set with the set of N points that minimizes the
Riesz energy on ∂S. For more details see [18, 19] and the references therein. To describe these good
configurations of points, we introduce the concept of equidistribution. We begin with a "good" or
equidistributed point set in the unit hypercube (described in Section 2.1.2) and map it to ∂S such that
the equidistribution property still holds (described in Section 2.1.3).
2.1.2 Equidistribution

Informally, a set of points in the unit hypercube is said to be equidistributed if the expected number
of points inside any axis-parallel subregion matches the true number of points. One such point set
in [0, 1]ⁿ is called the (t, m, n)-net in base β, which is defined as a set of N = β^m points in [0, 1]ⁿ
such that any axis-parallel β-adic box with volume β^{t−m} would contain exactly β^t points. Formally,
it is a point set that can attain the optimal integration error of O((log N)^{n−1}/N) [16] and is usually
referred to as a low-discrepancy point set. There is a vast literature on easy constructions of these point
sets. For more details on nets we refer to [16, 24].
2.1.3 Area preserving map to ∂S

Now once we have a point set on [0, 1]ⁿ, we map it to ∂S using a measure preserving transformation so that the equidistribution property remains intact. We describe the mapping in two steps.
First we map the point set from [0, 1]ⁿ to the hypersphere Sⁿ = {x ∈ ℝ^{n+1} : xᵀx = 1}. Then we
map it to ∂S. The mapping from [0, 1]ⁿ to Sⁿ is based on [12].

The cylindrical coordinates of the n-sphere can be written as

    x = x_n = (√(1 − t_n²) x_{n−1}, t_n), …, x_2 = (√(1 − t_2²) x_1, t_2), x_1 = (cos φ, sin φ),

where 0 ≤ φ ≤ 2π, −1 ≤ t_d ≤ 1, x_d ∈ S^d, and d = 1, …, n. Thus, an arbitrary point x ∈ Sⁿ can
be represented through the angle φ and heights t_2, …, t_n as

    x = x(φ, t_2, …, t_n),    0 ≤ φ ≤ 2π, −1 ≤ t_2, …, t_n ≤ 1.

We map a point y = (y_1, …, y_n) ∈ [0, 1)ⁿ to x ∈ Sⁿ using

    φ_1(y_1) = 2πy_1,    φ_d(y_d) = 1 − 2y_d  (d = 2, …, n),

and cylindrical coordinates x = Φ_n(y) = x(φ_1(y_1), φ_2(y_2), …, φ_n(y_n)). The fact that Φ_n :
[0, 1)ⁿ → Sⁿ is an area preserving map has been proved in [12].
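A direct transcription of the map Φ_n into code might look as follows (our sketch; it builds the point recursively, exactly following the cylindrical construction above).

    import numpy as np

    def map_to_sphere(y):
        # Area-preserving map Phi_n from a point y in [0,1)^n to the sphere
        # S^n in R^{n+1}.
        phi = 2.0 * np.pi * y[0]
        x = np.array([np.cos(phi), np.sin(phi)])   # point on S^1
        for y_d in y[1:]:
            t_d = 1.0 - 2.0 * y_d                  # height in [-1, 1]
            x = np.append(np.sqrt(1.0 - t_d ** 2) * x, t_d)
        return x                                   # unit vector in R^{len(y)+1}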
Remark. Instead of using (t, m, n)-nets and mapping to Sn , we could have also used spherical
t-designs, the existence of which was proved in [9]. However, construction of such sets is still a tough
problem in high dimensions. We refer to [13] for more details.
Finally, we consider the map ψ to translate the point set from S^{n−1} to ∂S. Specifically, we define

    ψ(x) = √b̂ · B^{−1/2} x + b.                                          (6)

From the definition of S in (3), it is easy to see that ψ(x) ∈ ∂S. The next result shows that this is
also an area-preserving map, in the sense of normalized surface measures.

Lemma 2. Let ψ be a mapping from S^{n−1} → ∂S as defined in (6). Then for any set A ⊆ ∂S,

    μ_n(A) = σ_n(ψ^{−1}(A)),

where μ_n, σ_n are the normalized surface measures of ∂S and S^{n−1} respectively.
Proof. Pick any A ⊆ ∂S. Then we can write

    ψ^{−1}(A) = { (1/√b̂) B^{1/2}(x − b) : x ∈ A }.

Now since the linear shift does not change the surface area, we have

    σ_n(ψ^{−1}(A)) = σ_n({ (1/√b̂) B^{1/2}(x − b) : x ∈ A }) = σ_n({ (1/√b̂) B^{1/2}x : x ∈ A }) = μ_n(A),

where the last equality follows from the definition of normalized surface measures and noting that
B^{1/2}x/√b̂ ∈ S^{n−1}. This completes the proof.
Using Lemma 2 we see that the map ψ ∘ Φ_{n−1} : [0, 1)^{n−1} → ∂S is a measure preserving map.
Using this map and the (t, m, n − 1)-net in base β, we derive the optimal β^m-point set on ∂S. Figure
2 shows how we transform a (0, 7, 2)-net in base 2 to a sphere and then to an ellipsoid. For more
general geometric constructions we refer to [7, 8].

[Figure 2: The left panel shows a (0, 7, 2)-net in base 2, which is mapped to a sphere in 3 dimensions (middle panel) and then mapped to the ellipsoid as seen in the right panel.]
2.2 Algorithm and Efficient Solution

From the description in the previous section we are now at a stage to describe the approximation
algorithm. We approximate the problem P by P(X_N) using a set of points x_1, …, x_N as described
in Algorithm 1. Once we formulate the problem P as P(X_N), we solve the large scale QP via
state-of-the-art solvers such as Operator Splitting or Block Splitting approaches [10, 25, 26].

Algorithm 1: Point Simulation on ∂S
    Input: B, b, b̂ to specify S, and N = β^m points
    Output: x_1, …, x_N ∈ ∂S
    Generate y_1, …, y_N as a (t, m, n − 1)-net in base β.
    for i = 1, …, N: set x_i = ψ(Φ_{n−1}(y_i))
    return x_1, …, x_N
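A minimal end-to-end sketch of Algorithm 1 is given below (our illustration; it reuses the map_to_sphere helper sketched in Section 2.1.3 and substitutes a Sobol' generator, available in scipy.stats.qmc, for a (t, m, n − 1)-net in base 2).

    import numpy as np
    from scipy.stats import qmc

    def simulate_boundary_points(B, b, b_hat, m):
        # N = 2^m low-discrepancy points on the boundary of
        # S = {x : (x - b)^T B (x - b) <= b_hat}.
        n = len(b)
        Y = qmc.Sobol(d=n - 1, scramble=False).random_base2(m)
        w, V = np.linalg.eigh(B)                       # B = V diag(w) V^T
        B_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        # psi(x) = sqrt(b_hat) * B^{-1/2} x + b, applied to each mapped point.
        return np.array([np.sqrt(b_hat) * B_inv_half @ map_to_sphere(y) + b
                         for y in Y])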
3 Convergence of P(X_N) to P

In this section, we shall show that if we follow Algorithm 1 to generate the approximate problem
P(X_N), then we converge to the original problem P as N → ∞. We shall also prove some finite
sample results to give error bounds on the solution to P(X_N). We start by introducing some notation.

Let x*, x*(N) denote the solutions to P and P(X_N) respectively, and let f(·) denote the strongly convex
objective function in (2), i.e., for ease of notation,

    f(x) = (x − a)ᵀA(x − a).
Theorem 1. Let P be the QCQP defined in (2) and P(XN ) be the approximate QP problem defined in
(5) via Algorithm 1. Then, P(XN ) ? P as N ? ? in the sense that limN ?? kx? (N ) ? x? k = 0.
Proof. Fix any N . Let TN denote the optimal bounded cover constructed with N points on ?S. Note
that to prove the result, it is enough to show that TN ? S as N ? ?. This guarantees that linear
constraints of P(XN ) converge to the quadratic constraint of P, and hence the two problems match.
Now since S ? TN for all N , it is easy to see that S ? limN ?? TN .
To prove the converse, let t0 ? limN ?? TN but t0 6? S. Thus, d(t0 , S) > 0. Let t1 denote the
projection of t0 onto S. Thus, t0 6= t1 ? ?S. Choose to be arbitrarily small and consider any
region A around t1 on ?S such that d(x, t1 ) ? for all x ? A . Here d denotes the surface distance
function. Now, by the equidistribution property of Algorithm 1 as N ? ?, there exists a point
t? ? A , the tangent plane through which cuts the plane joining t0 and t1 . Thus, t0 6? limN ?? TN .
Hence, we get a contradiction and the result is proved.
As a simple corollary to Theorem 1, it is easy to see that lim_{N→∞} |f(x*(N)) − f(x*)| = 0. We
now move to some finite sample results.

Theorem 2. Let g : ℕ → ℝ be such that lim_{n→∞} g(n) = 0. Further assume that ‖x*(N) − x*‖ ≤
C_1 g(N) for some constant C_1 > 0. Then |f(x*(N)) − f(x*)| ≤ C_2 g(N), where C_2 > 0 is a
constant.
Proof. We begin by bounding ‖x*‖. Note that since x* satisfies the constraint of the optimization
problem, we have b̂ ≥ (x* − b)ᵀB(x* − b) ≥ λ_min(B)‖x* − b‖², where λ_min(B) denotes the
smallest singular value of B. Thus,

    ‖x*‖ ≤ ‖b‖ + √(b̂ / λ_min(B)).                                        (7)

Now, since f(x) = (x − a)ᵀA(x − a) and ∇f(x) = 2A(x − a), we can write

    f(x) = f(x*) + ∫_0^1 ⟨∇f(x* + t(x − x*)), x − x*⟩ dt
         = f(x*) + ⟨∇f(x*), x − x*⟩ + ∫_0^1 ⟨∇f(x* + t(x − x*)) − ∇f(x*), x − x*⟩ dt
         = I_1 + I_2 + I_3 (say).

Now, we can bound the last term as follows. Observe that using the Cauchy-Schwarz inequality,

    |I_3| ≤ ∫_0^1 |⟨∇f(x* + t(x − x*)) − ∇f(x*), x − x*⟩| dt
         ≤ ∫_0^1 ‖∇f(x* + t(x − x*)) − ∇f(x*)‖ ‖x − x*‖ dt
         ≤ 2λ_max(A) ∫_0^1 ‖t(x − x*)‖ ‖x − x*‖ dt = λ_max(A) ‖x − x*‖²,

where λ_max(A) denotes the largest singular value of A. Thus, we have

    f(x) = f(x*) + ⟨∇f(x*), x − x*⟩ + C̃ ‖x − x*‖²,                       (8)

where |C̃| ≤ λ_max(A). Furthermore,

    |⟨∇f(x*), x*(N) − x*⟩| = |⟨2A(x* − a), x*(N) − x*⟩|
                           ≤ 2λ_max(A) (‖x*‖ + ‖a‖) ‖x*(N) − x*‖
                           ≤ 2C_1 λ_max(A) ( √(b̂/λ_min(B)) + ‖b‖ + ‖a‖ ) g(N),    (9)

where the last inequality follows from (7). Combining (8) and (9), the result follows.
Note that the function g gives us an idea about how fast x*(N) converges to x*. To help identify the
function g, we state the following results.

Lemma 3. If f(x*) = f(x*(N)), then x* = x*(N). Furthermore, if f(x*) ≠ f(x*(N)), then
x* ∈ ∂U and x*(N) ∉ U, where U = S ∩ {x : Cx = c} is the feasible set for (2).

Proof. Let V = T_N ∩ {x : Cx = c}. It is easy to see that U ⊆ V. Assume f(x*) = f(x*(N)) but
x* ≠ x*(N). Note that x*, x*(N) ∈ V. Since V is convex, consider the line joining x* and x*(N).
For any point η_t = tx* + (1 − t)x*(N),

    f(η_t) ≤ t f(x*) + (1 − t) f(x*(N)) = f(x*(N)).

Thus f is constant on the line joining x* and x*(N). But it is known that f is strongly convex,
since A is positive definite [27]. Thus there exists only one unique minimum. Hence we have
a contradiction, which proves x* = x*(N). Now let us assume that f(x*) ≠ f(x*(N)). Clearly
x*(N) ∉ U. Suppose x* lies in the interior of U. Let x̃ ∈ ∂U denote the point on the line joining
x* and x*(N), i.e., x̃ = tx* + (1 − t)x*(N) for some t > 0. Clearly f(x̃) < t f(x*) + (1 −
t) f(x*(N)) ≤ f(x*). But x* is the minimizer over U. Thus we have a contradiction, which gives
x* ∈ ∂U. This completes the proof.
Lemma 4. Following the notation of Lemma 3, if x*(N) ∉ U, then x* lies on ∂U and no point on
the line joining x* and x*(N) lies in S.

Proof. Since the gradient of f is linear, the result follows from an argument similar to that of Lemma 3.
Based on the above two results we can identify the function g by considering the maximum distance
of the points lying on the conic cap to the hyperplanes forming it. That is, g(N) is the maximum
distance between a point x ∈ ∂S and a point t ∈ T such that the line joining x and t does not intersect
S and hence lies completely within the conic section. This is highly dependent on the shape of S and
on the cover T_N. For example, if S is the unit circle in two dimensions, then the optimal N-point set
is the set of N-th roots of unity. In that case, there are N equivalent conic sections C_1, …, C_N created
by the intersections of ∂S with T_N. Figure 3 illustrates these regions.

[Figure 3: The shaded region shows the 6 equivalent conic regions C_1, …, C_6 for the unit circle; the marked angle is π/6.]
To formally define g(N) in this situation, let us define A(t, x) to be the set of all points on the line
joining t ∈ T and x ∈ ∂S. Now, it is easy to see that

    g(N) := max_{i=1,…,N}  sup_{t,x : A(t,x) ⊆ C_i} ‖t − x‖ = tan(π/N) = O(1/N),        (10)

where the bound follows from using the Taylor series expansion of tan(x). Combining this observation with Theorem 2 shows that in order to get an objective value within ε of the true optimal, we
would need N to be a constant multiple of 1/ε. More such results can be achieved by explicit
calculations over various different domains S.
4 Experimental Results
We compare our proposed technique to the current state-of-the-art solvers for QCQP. Specifically,
we compare it to the SDP and RLT relaxation procedures as described in [4]. For small enough
problems, we also compare our method to the exact solution obtained by interior point methods. Furthermore,
we provide empirical evidence to show that our sampling technique is better than other simpler
sampling procedures, such as uniform sampling on the unit square or on the unit sphere followed by
mapping to our domain as in Algorithm 1. We begin by considering a very simple
QCQP of the form

    minimize_x    xᵀAx
    subject to    (x − x_0)ᵀB(x − x_0) ≤ b̂,                              (11)
                  l ≤ x ≤ u.

We randomly sample A, B, x_0 and b̂, keeping the problem convex. The lower bound l and upper
bound u are chosen in a way such that they intersect the ellipsoid. We vary the dimension n of the
problem and tabulate the final objective value as well as the time taken for the different procedures
to converge in Table 1. The stopping criterion throughout our simulation is the same as that of the Operator
Splitting algorithm as presented in [26].
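For a small instance, the whole pipeline, i.e., sampling boundary points, forming the linear constraints of (5), and solving the resulting QP with an operator-splitting solver, can be sketched as follows (a toy illustration using cvxpy's OSQP interface together with the simulate_boundary_points helper above; the box bounds are arbitrary choices, and the reported experiments instead use large-scale distributed solvers).

    import numpy as np
    import cvxpy as cp

    n, m = 20, 10                                  # dimension, N = 2^m points
    rng = np.random.default_rng(0)
    A = np.diag(rng.uniform(1.0, 2.0, size=n))     # positive definite objective
    B = np.diag(rng.uniform(1.0, 2.0, size=n))     # positive definite constraint
    x0, b_hat = rng.standard_normal(n), 5.0

    X = simulate_boundary_points(B, x0, b_hat, m)  # Algorithm 1 (sketch above)
    G = (X - x0) @ B                               # tangent-plane constraints (4)
    h = b_hat + G @ x0

    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.quad_form(x, A)),
                         [G @ x <= h, x >= -10.0, x <= 10.0])
    problem.solve(solver=cp.OSQP)                  # operator-splitting QP solver
    feas_err = max((x.value - x0) @ B @ (x.value - x0) - b_hat, 0.0)
    print(problem.value, feas_err)                 # objective and feasibility error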
Table 1: The Optimal Objective Value and Convergence Time

n    | Our Method          | Sampling on [0,1]ⁿ  | Sampling on Sⁿ      | SDP                 | RLT                 | Exact
5    | 3.00 (4.61s)        | 2.99 (4.74s)        | 2.95 (6.11s)        | 3.07 (0.52s)        | 3.08 (0.51s)        | 3.07 (0.49s)
10   | 206.85 (5.04s)      | 205.21 (5.65s)      | 206.5 (5.26s)       | 252.88 (0.53s)      | 252.88 (0.51s)      | 252.88 (0.51s)
20   | 6291.4 (6.56s)      | 4507.8 (6.28s)      | 5052.2 (6.69s)      | 6841.6 (2.05s)      | 6841.6 (1.86s)      | 6841.6 (0.54s)
50   | 99668 (15.55s)      | 15122 (18.98s)      | 26239 (17.32s)      | 1.11 × 10⁵ (4.31s)  | 1.08 × 10⁵ (2.96s)  | 1.11 × 10⁵ (0.64s)
100  | 1.40 × 10⁶ (58.41s) | 69746 (1.03m)       | 1.24 × 10⁶ (54.69s) | 1.62 × 10⁶ (30.41s) | 1.52 × 10⁶ (15.36s) | 1.62 × 10⁶ (2.30s)
1000 | 2.24 × 10⁷ (14.87m) | 8.34 × 10⁶ (15.63m) | 9.02 × 10⁶ (15.32m) | NA                  | NA                  | NA
10⁵  | 3.10 × 10⁸ (25.82m) | 7.12 × 10⁷ (24.59m) | 8.39 × 10⁷ (27.23m) | NA                  | NA                  | NA
10⁶  | 3.91 × 10⁹ (38.30m) | 2.69 × 10⁸ (39.15m) | 7.53 × 10⁸ (37.21m) | NA                  | NA                  | NA
Throughout our simulations, we have chosen β = 2 and the number of optimal points as N =
max(1024, 2^m), where m is the smallest integer such that 2^m ≥ 10n. Note that even though the
SDP and the interior point methods converge very efficiently for small values of n, they cannot
scale to values of n ≥ 1000, which is where the strength of our method becomes evident. From
Table 1 we observe that the relaxation procedures SDP and RLT fail to converge within an hour of
computation time for n ≥ 1000, whereas all the approximation procedures can easily scale up to
n = 10⁶ variables. Moreover, since A and B were randomly sampled, we have seen that the true
optimal solution occurred at the boundary. Therefore, relaxing the constraint to linear forced the
solution to occur outside of the feasible set, as seen from the results in Table 1 as well as from Lemma
3. However, that is not a concern, since increasing N will definitely bring us closer to the feasible
set. The exact choice of N differs from problem to problem but can be computed as we did with the
small example in (10). Finally, the last column in Table 1 is obtained by solving the problem using
cvx in MATLAB via SeDuMi and SDPT3, which gives the true x*.
Furthermore, our procedure gives the best approximation result when compared to the remaining
two sampling schemes. Lemma 3 shows that if both objective values are the same then we
indeed get the exact solution. To see how much the approximation deviates from the truth, we also
plot the log of the relative squared error, i.e., log(‖x*(N) − x*‖²/‖x*‖²), for each of the sampling
procedures in Figure 4. Throughout this simulation, we keep N fixed at 1024. This is why we see
that the error level increases with the increase in dimension. We omit SDP and RLT results in Figure
4 since both of them produce a solution very close to the exact minimum for small n.
[Figure 4: The log of the relative squared error, log(‖x*(N) − x*‖²/‖x*‖²), with N fixed at 1024 and varying dimension n, for low-discrepancy sampling, uniform sampling on the square, and uniform sampling on the sphere.]
If we grow this N with the dimension, then we see that the increasing trend vanishes and we get much more accurate
results, as seen in Figure 5. We plot both the log of the relative squared error as well as the log of the
feasibility error, where the feasibility error is defined as

    Feasibility Error = ( (x*(N) − x_0)ᵀB(x*(N) − x_0) − b̂ )_+ ,

where (x)_+ denotes the positive part of x.
[Figure 5: The left and right panels show the decay in the relative squared error and the feasibility error respectively, for our method, as the number of constraints N increases (on a log scale) for dimensions 5, 10, and 20.]
From these results, it is clear that our procedure achieves the smallest relative error compared to the other
sampling schemes, and increasing N brings us closer to the feasible set, with more accurate results.
5 Discussion and Future Work

In this paper, we look at the problem of solving a large scale QCQP by relaxing the quadratic
constraint via a near-optimal sampling scheme. This approximate method can scale up to very large
problem sizes, while generating solutions with good theoretical convergence properties.
Theorem 2 gives us an upper bound as a function of g(N), which can be explicitly calculated for
different problems. To get the rate as a function of the dimension n, we need to understand how the
maximum and minimum eigenvalues of the two matrices A and B grow with n. One idea is to use
random matrix theory to come up with a probabilistic bound. Because of the complexity of
these problems, we believe they deserve special attention and hence we leave them to future work.
We also believe that this technique can be immensely important in several applications. Our next step
is to do a detailed study where we apply this technique to some of these applications and empirically
compare it with other existing large-scale commercial solvers such as CPLEX and ADMM-based
techniques for SDP.
Acknowledgment
We would sincerely like to thank the anonymous referees for their helpful comments which has
tremendously improved the paper. We would also like to thank Art Owen, Souvik Ghosh, Ya Xu and
Bee-Chung Chen for the helpful discussions.
References
[1] D. Agarwal, S. Chatterjee, Y. Yang, and L. Zhang. Constrained optimization for homepage relevance. In Proceedings of the 24th International Conference on World Wide Web Companion, pages 375–384. International World Wide Web Conferences Steering Committee, 2015.
[2] D. Agarwal, B.-C. Chen, P. Elango, and X. Wang. Personalized click shaping through lagrangian duality for online recommendation. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 485–494. ACM, 2012.
[3] C. Aholt, S. Agarwal, and R. Thomas. A QCQP approach to triangulation. In Proceedings of the 12th European Conference on Computer Vision - Volume Part I, ECCV'12, pages 654–667, Berlin, Heidelberg, 2012. Springer-Verlag.
[4] K. M. Anstreicher. Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. Journal of Global Optimization, 43(2):471–484, 2009.
[5] X. Bao, N. V. Sahinidis, and M. Tawarmalani. Semidefinite relaxations for quadratically constrained quadratic programming: A review and comparisons. Mathematical Programming, 129(1):129–157, 2011.
[6] K. Basu, S. Chatterjee, and A. Saha. Constrained multi-slot optimization for ranking recommendations. arXiv:1602.04391, 2016.
[7] K. Basu and A. B. Owen. Low discrepancy constructions in the triangle. SIAM Journal on Numerical Analysis, 53(2):743–761, 2015.
[8] K. Basu and A. B. Owen. Scrambled geometric net integration over general product spaces. Foundations of Computational Mathematics, 17(2):467–496, Apr. 2017.
[9] A. V. Bondarenko, D. Radchenko, and M. S. Viazovska. Optimal asymptotic bounds for spherical designs. Annals of Mathematics, 178(2):443–452, 2013.
[10] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[11] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[12] J. S. Brauchart and J. Dick. Quasi-Monte Carlo rules for numerical integration over the unit sphere S^2. Numerische Mathematik, 121(3):473–502, 2012.
[13] J. S. Brauchart and P. J. Grabner. Distributing many points on spheres: minimal energy and designs. Journal of Complexity, 31(3):293–326, 2015.
[14] J. S. Brauchart, D. P. Hardin, and E. B. Saff. The Riesz energy of the N-th roots of unity: an asymptotic expansion for large N. Bulletin of the London Mathematical Society, 41(4):621–633, 2009.
[15] A. De Maio, Y. Huang, D. P. Palomar, S. Zhang, and A. Farina. Fractional QCQP with applications in ML steering direction estimation for radar detection. IEEE Transactions on Signal Processing, 59(1):172–185, 2011.
[16] J. Dick and F. Pillichshammer. Digital Sequences, Discrepancy and Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge, 2010.
[17] M. Götz. On the Riesz energy of measures. Journal of Approximation Theory, 122(1):62–78, 2003.
[18] P. J. Grabner. Point sets of minimal energy. In Applications of Algebra and Number Theory (Lectures on the Occasion of Harald Niederreiter's 70th Birthday) (edited by G. Larcher, F. Pillichshammer, A. Winterhof, and C. Xing), pages 109–125, 2014.
[19] D. Hardin and E. Saff. Minimal Riesz energy point configurations for rectifiable d-dimensional manifolds. Advances in Mathematics, 193(1):174–204, 2005.
[20] Y. Huang and D. P. Palomar. Randomized algorithms for optimal solutions of double-sided QCQP with applications in signal processing. IEEE Transactions on Signal Processing, 62(5):1093–1108, 2014.
[21] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. E. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. In Machine Learning, Proceedings of (ICML 2002), pages 323–330, 2002.
[22] J. B. Lasserre. Semidefinite programming vs. LP relaxations for polynomial programming. Mathematics of Operations Research, 27(2):347–360, 2002.
[23] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1994.
[24] H. Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia, PA, 1992.
[25] B. O'Donoghue, E. Chu, N. Parikh, and S. Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, 2016.
[26] N. Parikh and S. Boyd. Block splitting for distributed optimization. Mathematical Programming Computation, 6(1):77–102, 2014.
[27] R. T. Rockafellar. Convex Analysis, 1970.
[28] J. Ye, S. Ji, and J. Chen. Learning the kernel matrix in discriminant analysis via quadratically constrained quadratic programming. In Proceedings of the 13th ACM SIGKDD 2007, pages 854–863, 2007.
[29] X. Zhu, J. Kandola, J. Lafferty, and Z. Ghahramani. Graph kernels by spectral transforms. Semi-supervised Learning, pages 277–291, 2006.
6,440 | 6,825 | Inhomogeneous Hypergraph Clustering with Applications
Pan Li
Department ECE
UIUC
[email protected]
Olgica Milenkovic
Department ECE
UIUC
[email protected]
Abstract
Hypergraph partitioning is an important problem in machine learning, computer
vision and network analytics. A widely used method for hypergraph partitioning
relies on minimizing a normalized sum of the costs of partitioning hyperedges
across clusters. Algorithmic solutions based on this approach assume that different
partitions of a hyperedge incur the same cost. However, this assumption fails
to leverage the fact that different subsets of vertices within the same hyperedge
may have different structural importance. We hence propose a new hypergraph
clustering technique, termed inhomogeneous hypergraph partitioning, which assigns different costs to different hyperedge cuts. We prove that inhomogeneous
partitioning produces a quadratic approximation to the optimal solution if the
inhomogeneous costs satisfy submodularity constraints. Moreover, we demonstrate
that inhomogeneous partitioning offers significant performance improvements in applications such as
applications such as structure learning of rankings, subspace segmentation and
motif clustering.
1 Introduction
Graph partitioning or clustering is a ubiquitous learning task that has found many applications in
statistics, data mining, social science and signal processing [1, 2]. In most settings, clustering is
formally cast as an optimization problem that involves entities with different pairwise similarities
and aims to maximize the total "similarity" of elements within clusters [3, 4, 5], or simultaneously
maximize the total similarity within cluster and dissimilarity between clusters [6, 7, 8]. Graph
partitioning may be performed in an agnostic setting, where part of the optimization problem is to
automatically learn the number of clusters [6, 7].
Although similarity among entities in a class may be captured via pairwise relations, in many realworld problems it is necessary to capture joint, higher-order relations between subsets of objects. From
a graph-theoretic point of view, these higher-order relations may be described via hypergraphs, where
objects correspond to vertices and higher-order relations among objects correspond to hyperedges.
The vertex clustering problem aims to minimize the similarity across clusters and is referred to as
hypergraph partitioning. Hypergraph clustering has found a wide range of applications in network
motif clustering, semi-supervised learning, subspace clustering and image segmentation [8, 9, 10, 11, 12, 13, 14, 15].
Classical hypergraph partitioning approaches share the same setup: A nonnegative weight is assigned
to every hyperedge and if the vertices in the hyperedge are placed across clusters, a cost proportional
to the weight is charged to the objective function [9, 11]. We refer to this clustering procedure
as homogenous hyperedge clustering and refer to the corresponding partition as a homogeneous
partition (H-partition). Clearly, this type of approach prohibits the use of information regarding
how different vertices or subsets of vertices belonging to a hyperedge contribute to the higher-order
relation. A more appropriate formulation entails charging different costs to different cuts of the
hyperedges, thereby endowing hyperedges with vector weights capturing these costs.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1 graphic: a reaction hyperedge over metabolites M1, M2, M3 with its three cuts c(M1), c(M2), c(M3), and a ten-vertex example network partitioned three ways: by a graph partition, an H-partition, and an InH-partition.]

Figure 1: Clusters obtained using homogeneous and inhomogeneous hypergraph partitioning and graph partitioning (based on pairwise relations). Left: each reaction is represented by a hyperedge. Three different cuts of a hyperedge are denoted by c(M3), c(M1), and c(M2), based on which vertex is "isolated" by the cut. The graph partition only takes into account pairwise relations between reactants, corresponding to w(c(M3)) = 0. The homogeneous partition enforces the three cuts to have the same weight, w(c(M3)) = w(c(M1)) = w(c(M2)), while an inhomogeneous partition is not required to satisfy this constraint. Right: three different clustering results based on optimally normalized cuts for a graph partition, a homogeneous partition (H-partition) and an inhomogeneous partition (InH-partition) with 0.01 w(c(M1)) ≤ w(c(M3)) ≤ 0.44 w(c(M1)).

To illustrate
the point, consider the example of metabolic networks [16]. In these networks, vertices describe
metabolites while edges describe transformative, catalytic or binding relations. Metabolic reactions
are usually described via equations that involve more than two metabolites, such as M1 + M2 → M3.
Here, both metabolites M1 and M2 need to be present in order to complete the reaction that leads
to the creation of the product M3 . The three metabolites play different roles: M1 , M2 are reactants,
while M3 is the product metabolite. A synthetic metabolic network involving reactions with three
reagents as described above is depicted in Figure 1, along with three different partitions induced by a
homogeneous, inhomogeneous and classical graph cut. As may be seen, the hypergraph cuts differ in
terms of how they split or group pairs of reagents. The inhomogeneous clustering preserves all but
one pairing, while the homogeneous clustering splits two pairings. The graph partition captures only pairwise relations between reactants and hence the optimal normalized cut over the graph splits six reaction triples. The differences between inhomogeneous, homogeneous, and pairwise-relation-based
cuts are even more evident for large graphs and they may lead to significantly different partitioning
performance in a number of important partitioning applications.
The problem of inhomogeneous hypergraph clustering has not been previously studied in the literature.
The main results of the paper are efficient algorithms for inhomogeneous hypergraph partitioning
with theoretical performance guarantees and extensive testing of inhomogeneous partitioning in
applications such as hierarchical biological network studies, structure learning of rankings and
subspace clustering¹ (all proofs and discussions of some applications are relegated to the Supplementary Material). The algorithmic methods are based on transforming hypergraphs into graphs
and subsequently performing spectral clustering based on the normalized Laplacian of the derived
graph. A similar approach for homogeneous clustering has been used under the name of Clique
Expansion [14]. However, the projection procedure, which is the key step of Clique Expansion,
differs significantly from the projection procedure used in our work, as the inhomogeneous clustering
algorithm allows non-uniform expansion of one hyperedge while Clique Expansion only allows
for uniform expansions. A straightforward analysis reveals that the normalized hypergraph cut
problem [11] and the normalized Laplacian homogeneous hypergraph clustering algorithms [9, 11]
are special cases of our proposed algorithm, where the costs assigned to the hyperedges take a very
special form. Furthermore, we show that when the costs of the proposed inhomogeneous hyperedge
clustering are submodular, the projection procedure is guaranteed to find a constant-approximation
solution for several graph-cut related entities. Hence, the inhomogeneous clustering procedure has
the same quadratic approximation properties as spectral graph clustering [17].
2 Preliminaries and Problem Formulation
A hypergraph H = (V, E) is described in terms of a vertex set V = {v_1, v_2, ..., v_n} and a set of hyperedges E. A hyperedge e ∈ E is a subset of vertices in V. For an arbitrary set S, we let |S| stand for the cardinality of the set, and use δ(e) = |e| to denote the size of a hyperedge. If for all e ∈ E, δ(e) equals a constant δ, the hypergraph is called a δ-uniform hypergraph.
¹ The code for experiments can be found at https://github.com/lipan00123/InHclustering.
Let 2^e denote the power set of e. An inhomogeneous hyperedge (InH-hyperedge) is a hyperedge with an associated weight function w_e : 2^e → ℝ_{≥0}. The weight w_e(S) indicates the cost of cutting/partitioning the hyperedge e into two subsets, S and e\S. A consistent weight w_e(S) satisfies the following properties: w_e(∅) = 0 and w_e(S) = w_e(e\S). The definition also allows w_e(·) to be enforced only for a subset of 2^e. However, for singleton sets S = {v} ⊆ e, w_e({v}) has to be specified. The degree of a vertex v is defined as d_v = Σ_{e: v∈e} w_e({v}), while the volume of a subset of vertices S ⊆ V is defined as

vol_H(S) = Σ_{v∈S} d_v.    (1)
Let (S, S̄) be a partition of the vertices V. Define the hyperedge boundary of S as ∂S = {e ∈ E | e ∩ S ≠ ∅, e ∩ S̄ ≠ ∅} and the corresponding set volume as

vol_H(∂S) = Σ_{e∈E} w_e(e ∩ S) = Σ_{e∈∂S} w_e(e ∩ S),    (2)

where the second equality holds since w_e(∅) = w_e(e) = 0. The task of interest is to minimize the normalized cut NCut of the hypergraph with InH-hyperedges, i.e., to solve the following optimization problem:

arg min_S NCut_H(S) = arg min_S vol_H(∂S) ( 1/vol_H(S) + 1/vol_H(S̄) ).    (3)
One may also extend the notion of InH hypergraph partitioning to a k-way InH-partition. For this purpose, we let (S_1, S_2, ..., S_k) be a k-way partition of the vertices V, and define the k-way normalized cut for InH-partitions according to

NCut_H(S_1, S_2, ..., S_k) = Σ_{i=1}^{k} vol_H(∂S_i) / vol_H(S_i).    (4)

Similarly, the goal of a k-way InH-partition is to minimize NCut_H(S_1, S_2, ..., S_k). Note that if δ(e) = 2 for all e ∈ E, the above definitions are consistent with those used for graphs [18].
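To make the quantities in (1)-(3) concrete, here is a minimal Python sketch (our own illustration, not the authors' released code) for an InH-hypergraph represented as a list of (e, w_e) pairs, with each w_e stored as a dict over frozensets:

import math

def degree(v, hyperedges):
    # d_v: sum of w_e({v}) over all hyperedges e containing v.
    return sum(w[frozenset([v])] for e, w in hyperedges if v in e)

def volume(S, hyperedges):
    # vol_H(S), Eq. (1).
    return sum(degree(v, hyperedges) for v in S)

def boundary_volume(S, hyperedges):
    # vol_H(dS), Eq. (2): cost of every hyperedge straddling (S, S-bar).
    total = 0.0
    for e, w in hyperedges:
        cut = frozenset(e) & frozenset(S)
        if cut and cut != frozenset(e):
            total += w[cut]
    return total

def ncut(S, vertices, hyperedges):
    # NCut_H(S), Eq. (3).
    S = frozenset(S)
    S_bar = frozenset(vertices) - S
    return boundary_volume(S, hyperedges) * (
        1.0 / volume(S, hyperedges) + 1.0 / volume(S_bar, hyperedges))

# Toy check on a single consistent hyperedge e = {1, 2, 3}
# (note w_e(S) = w_e(e \ S) throughout):
w = {frozenset([1]): 0.5, frozenset([2]): 0.5, frozenset([3]): 1.0,
     frozenset([1, 2]): 1.0, frozenset([1, 3]): 0.5, frozenset([2, 3]): 0.5}
print(ncut({3}, {1, 2, 3}, [((1, 2, 3), w)]))   # -> 2.0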
3 Inhomogeneous Hypergraph Clustering Algorithms
Motivated by the homogeneous clustering approach of [14], we propose an inhomogeneous clustering
algorithm that uses three steps: 1) Projecting each InH-hyperedge onto a subgraph; 2) Merging
the subgraphs into a graph; 3) Performing classical spectral clustering based on the normalized
Laplacian (described in the Supplementary Material, along with the complexity of all algorithmic
steps). The novelty of our approach is in introducing the inhomogenous clustering constraints via the
projection step, and stating an optimization problem that provides the provably best weight splitting
for projections. All our theoretical results are stated for the NCut problem, but the proposed methods
may be used as heuristics for k-way NCuts.
Suppose that we are given a hypergraph with inhomogeneous hyperedge weights, H = (V, E, w). For each InH-hyperedge (e, w_e), we aim to find a complete subgraph G_e = (V^(e), E^(e), w^(e)) that "best" represents this InH-hyperedge; here, V^(e) = e, E^(e) = {{v, ṽ} | v, ṽ ∈ e, v ≠ ṽ}, and w^(e) : E^(e) → ℝ denotes the hyperedge weight vector. The goal is to find the graph edge weights that provide the best approximation to the split hyperedge weight according to:

min_{w^(e), β^(e)} β^(e)   s.t.   w_e(S) ≤ Σ_{v∈S, ṽ∈e\S} w^(e)_{vṽ} ≤ β^(e) w_e(S),   for all S ∈ 2^e s.t. w_e(S) is defined.    (5)
Upon solving for the weights w^(e), we construct a graph G = (V, E_o, w), where V are the vertices of the hypergraph, E_o is the complete set of edges, and where the weights w_{vṽ} are computed via

w_{vṽ} = Σ_{e∈E} w^(e)_{vṽ},   for all {v, ṽ} ∈ E_o.    (6)

This step represents the projection weight merging procedure, which simply reduces to the sum of weights of all hyperedge projections on a pair of vertices. Due to the linearity of the volumes (1) and boundaries (2) of sets S of vertices, for any S ⊆ V, we have

vol_H(∂S) ≤ vol_G(∂S) ≤ β* vol_H(∂S),   vol_H(S) ≤ vol_G(S) ≤ β* vol_H(S),    (7)

where β* = max_{e∈E} β^(e). Applying spectral clustering on G = (V, E_o, w) produces the desired partition (S*, S̄*). The next result is a consequence of combining the bounds of (7) with the approximation guarantees of spectral graph clustering (Theorem 1 [17]).
Theorem 3.1. If the optimization problem (5) is feasible for all InH-hyperedges and the weights w_{vṽ} obtained from (6) are nonnegative for all {v, ṽ} ∈ E_o, then α* = NCut_H(S*) satisfies

(β*)³ α_H ≥ (α*)²/8 ≥ α_H²/8,    (8)

where α_H is the optimal value of the normalized cut of the hypergraph H.
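A rough numpy sketch of steps 2) and 3), i.e., the merging rule (6) followed by spectral clustering with a sweep cut, is given below; it assumes the per-hyperedge projections w^(e) have already been computed (for instance via Eq. (9) or (10) later in this section). Negative merged weights are clipped to zero, the heuristic discussed next in the text; all names are ours and are not taken from the authors' released code.

import numpy as np

def merge_projections(n, projected):
    # Eq. (6): sum the projected subgraph weights over all hyperedges;
    # negatives are clipped to 0 (see the discussion below).
    W = np.zeros((n, n))
    for (v, u), w in projected:          # projected: iterable of ((v, u), w)
        W[v, u] += w
        W[u, v] += w
    return np.maximum(W, 0.0)

def spectral_bipartition(W):
    # Sweep cut over the second eigenvector of the normalized Laplacian.
    d = W.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(d)) - (d_is[:, None] * W) * d_is[None, :]
    _, vecs = np.linalg.eigh(L)
    order = np.argsort(d_is * vecs[:, 1])     # relaxed cluster indicator
    best, best_ncut = None, np.inf
    for k in range(1, len(order)):            # sweep all threshold cuts
        S, S_bar = order[:k], order[k:]
        cut = W[np.ix_(S, S_bar)].sum()
        ncut = cut * (1.0 / d[S].sum() + 1.0 / d[S_bar].sum())
        if ncut < best_ncut:
            best, best_ncut = set(S.tolist()), ncut
    return best, best_ncut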
There are no guarantees that the w_{vṽ} will be nonnegative: the optimization problem (5) may result in solutions w^(e) that are negative. The performance of spectral methods in the presence of negative edge weights is not well understood [19, 20]; hence, it would be desirable to have the weights w_{vṽ} generated from (6) be nonnegative. Unfortunately, imposing nonnegativity constraints in the optimization problem may render it infeasible. In practice, one may use (w_{vṽ})+ = max{w_{vṽ}, 0} to remove negative weights (other choices, such as (w_{vṽ})+ = Σ_{e∈E} (w^(e)_{vṽ})+, do not appear to perform well). This change invalidates the theoretical result of Theorem 3.1, but provides solutions with very good empirical performance. The issues discussed are illustrated by the next example.
Example 3.1. Let e = {1, 2, 3}, (w_e({1}), w_e({2}), w_e({3})) = (0, 0, 1). The solution to the weight optimization problem is (β^(e), w^(e)_{12}, w^(e)_{13}, w^(e)_{23}) = (1, -1/2, 1/2, 1/2). If all components of w^(e) are constrained to be nonnegative, the optimization problem is infeasible. Nevertheless, the above choice of weights is very unlikely to be encountered in practice, as w_e({1}), w_e({2}) = 0 indicates that vertices 1 and 2 have no relevant connections within the given hyperedge e, while w_e({3}) = 1 indicates that vertex 3 is strongly connected to 1 and 2, which is a contradiction. Let us assume next that the negative weight is set to zero. Then, the adjusted weights ((w^(e)_{12})+, w^(e)_{13}, w^(e)_{23}) = (0, 1/2, 1/2) produce the clusterings ((1,3),(2)) or ((2,3),(1)); both have zero cost based on w_e.
Another problem is that arbitrary choices for w_e may cause the optimization problem (5) to be infeasible even if negative weights w^(e) are allowed, as illustrated by the following example.

Example 3.2. Let e = {1, 2, 3, 4}, with w_e({1, 4}) = w_e({2, 3}) = 1 and w_e(S) = 0 for all other choices of sets S. To force the weights to zero, we require w^(e)_{vṽ} = 0 for all pairs vṽ, which fails to work for w_e({1, 4}), w_e({2, 3}). For a hyperedge e, the degrees of freedom of w_e are 2^{δ(e)-1} - 1, as two values of w_e are fixed, while the other values are paired up by symmetry. When δ(e) > 3, we have δ(e)(δ(e)-1)/2 < 2^{δ(e)-1} - 1, which indicates that the problem is overdetermined/infeasible.
In what follows, we provide sufficient conditions for the optimization problem to have a feasible solution with nonnegative values of the weights w^(e). Also, we provide conditions for the weights w_e that result in a small constant β* and hence allow for quadratic approximations of the optimum solution. Our results depend on the availability of information about the weights w_e: in practice, the weights have to be inferred from observable data, which may not suffice to determine more than the weight of singletons or pairs of elements.
Only the values of w_e({v}) are known. In this setting, we are only given information about how much each node contributes to a higher-order relation, i.e., we are only given the values of w_e({v}), v ∈ V. Hence, we have δ(e) costs (equations) and δ(e)(δ(e)-1)/2 ≥ δ(e) variables (for δ(e) ≥ 3), which makes the problem underdetermined and easy to solve. The optimal β^(e) = 1 is attained by setting, for all edges {v, ṽ},

w^(e)_{vṽ} = (1/(δ(e) - 2)) [w_e({v}) + w_e({ṽ})] - (1/((δ(e) - 1)(δ(e) - 2))) Σ_{v'∈e} w_e({v'}).    (9)

The components of w_e(·) with positive coefficients in (9) are precisely those associated with the endpoints of the edge vṽ. Using simple algebraic manipulations, one can derive the conditions under which the values w^(e)_{vṽ} are nonnegative, and these are presented in the Supplementary Material.
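Eq. (9) translates directly into code. The sketch below (our own naming, not the authors' implementation) also reproduces Example 3.1 above, including the possibly negative output:

def project_singletons(e, w_single):
    # Eq. (9). e: list of vertices; w_single: dict v -> w_e({v}).
    # The returned pair weights may be negative (cf. Example 3.1 above).
    d = len(e)
    assert d >= 3, "Eq. (9) requires delta(e) >= 3"
    total = sum(w_single[v] for v in e)
    return {(v, u): (w_single[v] + w_single[u]) / (d - 2)
                    - total / ((d - 1) * (d - 2))
            for i, v in enumerate(e) for u in e[i + 1:]}

print(project_singletons([1, 2, 3], {1: 0.0, 2: 0.0, 3: 1.0}))
# {(1, 2): -0.5, (1, 3): 0.5, (2, 3): 0.5}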
The solution to (9) produces a perfect projection with β^(e) = 1. Unfortunately, one cannot guarantee that the solution is nonnegative. Hence, the question of interest is to determine for what types of cuts one can deviate from a perfect projection but still ensure that the weights are nonnegative. The proposed approach is to set the unspecified values of w_e(·) so that the weight function becomes submodular, which guarantees nonnegative weights w^(e)_{vṽ} that approximate w_e(·) up to a constant factor, although with a larger approximation constant β.
Submodular weights w_e(S). As previously discussed, when δ(e) > 3, the optimization problem (5) may not have any feasible solutions for arbitrary choices of weights. However, we show next that if the weights w_e are submodular, then (5) always has a nonnegative solution. We start by recalling the definition of a submodular function.

Definition 3.2. A function w_e : 2^e → ℝ_{≥0} that satisfies

w_e(S_1) + w_e(S_2) ≥ w_e(S_1 ∪ S_2) + w_e(S_1 ∩ S_2)   for all S_1, S_2 ∈ 2^e

is termed submodular.
Theorem 3.3. If w_e is submodular, then

w^(e)_{vṽ} = Σ_{S ∈ 2^e \ {∅, e}} [ (w_e(S) / (2|S|(δ(e) - |S|))) 1_{|{v,ṽ}∩S|=1} - (w_e(S) / (2(|S| + 1)(δ(e) - |S| - 1))) 1_{|{v,ṽ}∩S|=0} - (w_e(S) / (2(|S| - 1)(δ(e) - |S| + 1))) 1_{|{v,ṽ}∩S|=2} ]    (10)

is nonnegative. For 2 ≤ δ(e) ≤ 7, the function above is a feasible solution for the optimization problem (5) with the parameters β^(e) listed in Table 1.
Table 1: Feasible values of β^(e) for hyperedge sizes δ(e).

  δ(e) | 2 | 3 | 4   | 5 | 6 | 7
  β    | 1 | 1 | 3/2 | 2 | 4 | 6
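The linear map (10) is likewise only a few lines of code. The sketch below (illustrative naming, not the authors' implementation) assumes w_e(S) is available for every proper nonempty S ⊆ e, with submodular completions, as in Example 3.3 below, done beforehand:

from itertools import combinations

def project_submodular(e, w_e):
    # Eq. (10): e is a tuple of vertices, w_e maps frozenset(S) -> cost.
    d = len(e)
    pair_w = {}
    for v, u in combinations(e, 2):
        total = 0.0
        for k in range(1, d):                 # all S in 2^e \ {empty, e}
            for S in combinations(e, k):
                S = frozenset(S)
                inter = len(S & {v, u})
                if inter == 1:                # edge vu crosses the cut (S, e\S)
                    total += w_e[S] / (2 * k * (d - k))
                elif inter == 0:              # neither endpoint in S
                    total -= w_e[S] / (2 * (k + 1) * (d - k - 1))
                else:                         # both endpoints inside S
                    total -= w_e[S] / (2 * (k - 1) * (d - k + 1))
        pair_w[(v, u)] = total
    return pair_w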
Theorem 3.3 also holds when some weights in the set w_e are not specified, but may be completed so as to satisfy the submodularity constraints (see Example 3.3).

Example 3.3. Let e = {1, 2, 3, 4}, (w_e({1}), w_e({2}), w_e({3}), w_e({4})) = (1/3, 1/3, 1, 1). Solving (9) yields w^(e)_{12} = -1/9 and β^(e) = 1. Completing the missing components of w_e as (w_e({1, 2}), w_e({1, 3}), w_e({1, 4})) = (2/3, 1, 1) leads to submodular weights (observe that completions are not necessarily unique). Then, the solution of (10) gives w^(e)_{12} = 0 and β^(e) ∈ (1, 3/2], which is clearly larger than one.
Remark 3.1. It is worth pointing out that β = 1 when δ(e) = 3, which asserts that homogeneous triangle clustering may be performed via spectral methods on graphs without any weight projection distortion [9]. The above results extend this finding to the inhomogeneous case whenever the weights are submodular. In addition, triangle clustering based on random walks [21] may be extended to the inhomogeneous case.
Also, (10) leads to an optimal approximation ratio β^(e) if we restrict w^(e) to be a linear mapping of w_e, which is formally stated next.

Theorem 3.4. Suppose that for all pairs {v, ṽ} ∈ E_o, w^(e)_{vṽ} is a linear function of w_e, denoted by w^(e)_{vṽ} = f_{vṽ}(w_e), where {f_{vṽ}}_{vṽ ∈ E^(e)} depends on δ(e) but not on w_e. Then, when δ(e) ≤ 7, the optimal values of β for the following optimization problem depend only on δ(e), and are equal to those listed in Table 1:

min_{{f_{vṽ}}_{{v,ṽ}∈E_o}, β}  max_{submodular w_e}  β    (11)

s.t.   w_e(S) ≤ Σ_{v∈S, ṽ∈e\S} f_{vṽ}(w_e) ≤ β w_e(S),   for all S ∈ 2^e.

Remark 3.2. Although we were able to prove feasibility (Theorem 3.3) and optimality of linear solutions (Theorem 3.4) only for small values of δ(e), we conjecture the results to be true for all δ(e).
The following theorem shows that if the weights w_e of hyperedges in a hypergraph are generated from graph cuts of a latent weighted graph, then the projected weights of hyperedges are proportional to the corresponding weights in the latent graph.

Theorem 3.5. Suppose that G_e = (V^(e), E^(e), w̃^(e)) is a latent graph that generates hyperedge weights w_e according to the following procedure: for any S ⊆ e, w_e(S) = Σ_{v∈S, ṽ∈e\S} w̃^(e)_{vṽ}. Then, equation (10) establishes that w^(e)_{vṽ} = β^(e) w̃^(e)_{vṽ} for all vṽ ∈ E^(e), with β^(e) = (2^{δ(e)} - 2)/(δ(e)(δ(e) - 1)) (here w̃^(e) denotes the latent weights, to distinguish them from the projected ones).
Theorem 3.5 establishes the consistency of the linear map (10), and also shows that the min-max optimal approximation ratio for linear functions equals Θ(2^{δ(e)}/δ(e)²). An independent line of work [22], based on (non-linear) Gomory-Hu trees, established that submodular functions admit nonnegative solutions of the optimization problem (5) with β^(e) = δ(e) - 1. Therefore, an unrestricted solution of the optimization problem (5) ensures that β^(e) ≤ δ(e) - 1.

As practical applications almost exclusively involve hypergraphs with small, constant δ(e), the Gomory-Hu tree approach is in this case suboptimal in approximation ratio compared to (10). The expression (10) can be rewritten as w^(e) = M w_e, where M is a matrix that only depends on δ(e). Hence, the projected weights can be computed in a very efficient and simple manner, as opposed to constructing the Gomory-Hu tree or solving (5) directly. In the rare case that one has to deal with hyperedges for which δ(e) is large, the Gomory-Hu tree approach and a direct solution of (5) may be preferred.
4 Related Work and Discussion
One contribution of our work is to introduce the notion of an inhomogeneous partition of hyperedges and a new hypergraph projection method that accompanies the procedure. Subsequent edge weight merging and spectral clustering are standardly used in hypergraph clustering algorithms, and in particular in Zhou's normalized hypergraph cut approach [11], Clique Expansion, Star Expansion and Clique Averaging [14]. The formulation closest to ours is Zhou's method [11]. In the aforementioned hypergraph clustering method for H-hyperedges, each hyperedge e is assigned a scalar weight w_e^H. For the projection step, Zhou used w_e^H/δ(e) for the weight of each pair of endpoints of e. If we view the H-hyperedge as an InH-hyperedge with weight function w_e, where w_e(S) = w_e^H |S|(δ(e) - |S|)/δ(e) for all S ∈ 2^e, then our definition of the volume/cost of the boundary (2) is identical to that of Zhou's. With this choice of w_e, the optimization problem (5) outputs w^(e)_{vṽ} = w_e^H/δ(e), with β^(e) = 1, which are the same values as those obtained via Zhou's projection. The degree of a vertex in [11] is defined as d_v = Σ_{e∈E} h(e, v) w_e^H = Σ_{e∈E} (δ(e)/(δ(e) - 1)) w_e({v}), which is a weighted sum of the w_e({v}) and thus takes a slightly different form when compared to our definition. As a matter of fact, for uniform hypergraphs, the two forms are the same. Some other hypergraph clustering algorithms, such as Clique Expansion and Star Expansion, as shown by Agarwal et al. [23], represent special cases of our method for uniform hypergraphs as well.
The Clique Averaging method differs substantially from all the aforedescribed methods. Instead of projecting each hyperedge onto a subgraph and then combining the subgraphs into a graph, the algorithm performs a one-shot projection of the whole hypergraph onto a graph. The projection is based on an ℓ2-minimization rule, which may not allow for constant-approximation solutions. It is unknown whether the result of the procedure can provide a quadratic approximation of the optimum solution. Clique Averaging also has practical implementation problems and high computational complexity, as it is necessary to solve a linear regression with n² variables and n^{δ(e)} observations.
In the recent work on network motif clustering [9], the hyperedges are deduced from a graph in which they represent so-called motifs. Benson et al. [9] proved that if the motifs have three vertices, resulting in a three-uniform hypergraph, their proposed algorithm satisfies the Cheeger inequality for motifs². In the described formulation, when cutting an H-hyperedge with weight w_e^H, one is required to pay w_e^H. Hence, recasting this model within our setting, we arrive at the inhomogeneous weights w_e(S) = w_e^H for all S ∈ 2^e, for which (5) yields w^(e)_{vṽ} = w_e^H/(δ(e) - 1) and β^(e) = ⌊δ(e)²/4⌋/(δ(e) - 1), identical to the solution of [9]. Furthermore, given the result of our Theorem 3.1, one can prove that the algorithm of [9] offers a quadratic-factor approximation for motifs involving more than three vertices, a fact that was not established in the original work [9].

² The Cheeger inequality [17] arises in the context of minimizing the conductance of a graph, which is related to the normalized cut.
All the aforementioned algorithms essentially learn the spectrum of Laplacian matrices obtained
through hypergraph projection. The ultimate goal of projections is to avoid solving the NP-hard
problem of learning the spectrum of certain hypergraph Laplacians [24]. Methods that do not rely on
hypergraph projection, including optimization with the total variation of hypergraphs [12, 13], tensor
spectral methods [25] and nonlinear Laplacian spectral methods [26], have also been reported in the
literature. These techniques were exclusively applied in homogeneous settings, and they typically
have higher complexity and smaller spectral gaps than the projection-based methods. A future line
of work is to investigate whether these methods can be extended to the inhomogeneous case. Yet
another relevant line of work pertains to the statistical analysis of hypergraph partitioning methods
for generalized stochastic block models [27, 28].
5 Applications
Network motif clustering. Real-world networks exhibit rich higher-order connectivity patterns
frequently referred to as network motifs [29]. Motifs are special subgraphs of the graph and may be
viewed as hyperedges of a hypergraph over the same set of vertices. Recent work has shown that
hypergraph clustering based on motifs may be used to learn hidden high-order organization patterns
in networks [9, 8, 21]. However, this approach treats all vertices and edges within the motifs in the
same manner, and hence ignores the fact that each structural unit within the motif may have a different
relevance or different role. As a result, the vertices of the motifs are partitioned with a uniform
cost. However, this assumption is hardly realistic as in many real networks, only some vertices of
higher-order structures may need to be clustered together. Hence, inhomogeneous hyperedges are expected to elucidate more subtle higher-order organization of networks. We illustrate the utility of
InH-partition on the Florida Bay foodweb [30] and compare our findings to those of [9].
The Florida Bay foodweb comprises 128 vertices corresponding to different species or organisms
that live in the Bay, and 2106 directed edges indicating carbon exchange between two species. The
foodweb essentially represents a layered flow network, as carbon flows from so-called producer organisms to high-level predators. Each layer of the network consists of "similar" species that play the same role in the food chain. Clustering of the species may be performed by leveraging the layered structure of the interactions. As a network motif, we use a subgraph of four species, and correspondingly, four vertices denoted by v_i, for i = 1, 2, 3, 4. The motif captures, among others, relations between two producers and two consumers: the producers v1 and v2 both transmit carbon to v3 and v4, and all types of carbon flow between v1 and v2, and between v3 and v4, are allowed (see Figure 2, left). Such a motif is the smallest structural unit capturing the fact that carbon exchange occurs in one direction between layers, while it is allowed freely within layers. The inhomogeneous hyperedge costs are assigned according to the following heuristic: as v1 and v2 share two common carbon recipients (predators) while v3 and v4 share two common carbon sources (preys), we set w_e({v_i}) = 1 for i = 1, 2, 3, 4, and w_e({v1, v2}) = 0, w_e({v1, v3}) = 2, and w_e({v1, v4}) = 2. Based on the solution of the optimization problem (5), one can construct a weighted subgraph whose costs of cuts match the inhomogeneous costs exactly, with β^(e) = 1; the graph is depicted in Figure 2 (left), and a quick check of it follows below.
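For this motif the exact projection can be verified by hand: the pair weights w_12 = w_34 = 1, with all other pairs set to 0, reproduce every specified cut cost (and, with nonnegative weights, this choice is in fact forced, since the zero cost of the cut {v1, v2} zeroes out the four cross pairs). The snippet below, a sketch with our own naming, performs this check:

from itertools import combinations

pair_w = {frozenset(p): 0.0 for p in combinations(range(1, 5), 2)}
pair_w[frozenset({1, 2})] = pair_w[frozenset({3, 4})] = 1.0

def cut(S):
    # Cost of the cut (S, e \ S) in the projected subgraph.
    return sum(w for p, w in pair_w.items() if len(p & S) == 1)

specified = {frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
             frozenset({4}): 1, frozenset({1, 2}): 0,
             frozenset({1, 3}): 2, frozenset({1, 4}): 2}
assert all(cut(S) == c for S, c in specified.items())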
Our approach is to perform hierarchical clustering via iterative application of the InH-partition
method. In each iteration, we construct a hypergraph by replacing the chosen motif subnetwork by a
hyperedge. The result is shown in Figure 2. At the first level, we partitioned the species into three
clusters corresponding to producers, primary consumers and secondary consumers. The producer
cluster is homogeneous insofar as it contains only producers, a total of nine of them. At the second
level, we partitioned the obtained primary-consumer cluster into two clusters, one of which almost
exclusively comprises invertebrates (28 out of 35), while the other almost exclusively comprises
forage fishes. The secondary-consumer cluster is partitioned into two clusters, one of which comprises
top-level predators, while the other cluster mostly consists of predatory fishes and birds. Overall,
we recovered five clusters that fit five layers ranging from producers to top-level consumers. It is
easy to check that the producer, invertebrate and top-level predator clusters exhibit high functional
similarity of species (> 80%). An exact functional classification of forage and predatory fishes is not
known, but our layered network appears to capture an overwhelmingly large number of prey-predator
relations among these species. Among the 1714 edges, obtained after removing isolated vertices and
detritus species vertices, only five edges point in the opposite direction from a higher to a lower-level
cluster, two of which go from predatory fishes to forage fishes. Detailed information about the species and clusters is provided in the Supplementary Material.

[Figure 2 graphic: the four-vertex motif (producers v1, v2 feeding consumers v3, v4) and its projection onto a weighted subgraph; the triangle motif of Benson et al. '16 for comparison; and the hierarchical partition of the foodweb into groups labeled Producers, Primary consumers (Invertebrates, Forage fishes), Secondary consumers (Predatory fishes & Birds, Top-level Predators), Microfauna, Pelagic fishes, Crabs & Benthic fishes, and Macroinvertebrates.]

Figure 2: Motif clustering in the Florida Bay food web. Left: inhomogeneous case. Left-top: hyperedge (network motif) and the weighted induced subgraph; left-bottom: hierarchical clustering structure and five clusters via InH-partition. The vertices belonging to different clusters are distinguished by the colors of vertices. Edges with a uni-direction (right to left) are colored black while other edges are kept blue. Right: homogeneous partitioning [9] with four clusters. Grey vertices are not connected by motifs and thus unclassified.
In comparison, the related work of Benson et al. [9], which used homogeneous hypergraph clustering and triangular motifs, reported a very different clustering structure. The corresponding clusters
covered less than half of the species (62 out of 128) as many vertices were not connected by the
triangle motif; in contrast, 127 out of 128 vertices were covered by our choice of motif. We attribute
the difference between our results and the results of [9] to the choice of the network motif. The triangle motif used in [9] leaves a large number of vertices unclustered and fails to enforce a hierarchical
network structure. On the other hand, our fan motif with homogeneous weights produces a giant
cluster as it ties all the vertices together, and the hierarchical decomposition is only revealed when the
fan motif is used with inhomogeneous weights. In order to identify hierarchical network structures,
instead of hypergraph clustering, one may use topological sorting to rank species based on their
carbon flows [31]. Unfortunately, topological sorting cannot use biological side information and
hence fails to automatically determine the boundaries of the clusters.
Learning the Riffled Independence Structure of Ranking Data. Learning probabilistic models
for ranking data has attracted significant interest in social and political sciences as well as in machine
learning [32, 33]. Recently, a probabilistic model, termed the riffled-independence model, was shown
to accurately describe many benchmark ranked datasets [34]. In the riffled independence model, one
first generates two rankings over two disjoint sets of elements independently, and then riffle shuffles
the rankings to arrive at an interleaved order. The structure learning problem in this setting reduces to
distinguishing the two categories of elements based on limited ranking data. More precisely, let Q
be the set of candidates to be ranked, with |Q| = n. A full ranking is a bijection σ : Q → [n], and for a ∈ Q, σ(a) denotes the position of candidate a in the ranking σ. We use σ(a) < (>) σ(b) to indicate that a is ranked higher (lower) than b in σ. If S ⊆ Q, we use σ_S : S → [|S|] to denote the ranking σ projected onto the set S. We also use S(σ) ≜ {σ(a) | a ∈ S} to denote the subset of positions of elements in S. Let P(E) denote the probability of the event E. Riffled independence asserts that there exists a riffled-independent set S ⊆ Q such that, for a fixed ranking σ' over [n],

P(σ = σ') = P(σ_S = σ'_S) P(σ_{Q\S} = σ'_{Q\S}) P(S(σ) = S(σ')).
Suppose that we are given a set of rankings Σ = {σ^(1), σ^(2), ..., σ^(m)} drawn independently according to some probability distribution P. If P has a riffled-independent set S*, the structure learning problem is to find S*. In [34], the described problem was cast as an optimization problem over all possible subsets of Q, with the objective of minimizing the Kullback-Leibler divergence between the ranking distribution with riffled independence and the empirical distribution of Σ [34]. A simplified version of the optimization problem reads as

arg min_{S⊆Q} F(S) ≜ Σ_{(i,j,k) ∈ Ω^cross_{S,S̄}} Î_{i;j,k} + Σ_{(i,j,k) ∈ Ω^cross_{S̄,S}} Î_{i;j,k},    (12)

where Ω^cross_{A,B} ≜ {(i, j, k) | i ∈ A, j, k ∈ B}, and where Î_{i;j,k} denotes the estimated mutual information between the position of candidate i and two "comparison candidates" j, k.
[Figure 3 graphic: the hierarchical splits found by InH-Par on the candidate set {1, ..., 14}, first {1, 4, 13}, then {2, 5, 6}, then {7, 8, 9}; a party/candidate table (Fianna Fáil: 1, 4, 13; Fine Gael: 2, 5, 6; Independent: 7, 8, 9; Others: 10, 11, 12, 14); and two plots of success rate, one versus sample complexity m and one versus triple-sampling probability r, for InH-Par and APar on the clusters F.F., F.G., Ind., and All.]

Figure 3: Election dataset. Left-top: parties and candidates; left-bottom: hierarchical partitioning structure of the Irish election detected by InH-Par; middle: success rate vs. sample complexity; right: success rate vs. triple-sampling rate.
If 1_{σ(j)<σ(k)} denotes the indicator function of the underlying event, we may write

Î_{i;j,k} ≜ Î(σ(i); 1_{σ(j)<σ(k)}) = Σ_{σ(i)} Σ_{1_{σ(j)<σ(k)}} P̂(σ(i), 1_{σ(j)<σ(k)}) log [ P̂(σ(i), 1_{σ(j)<σ(k)}) / ( P̂(σ(i)) P̂(1_{σ(j)<σ(k)}) ) ],    (13)
where P̂ denotes an estimate of the underlying probability. If i and j, k are in different riffled-independent sets, the estimated mutual information Î(σ(i); 1_{σ(j)<σ(k)}) converges to zero as the number of samples increases. When the number of samples is small, one may use the mutual information estimators described in [35, 36, 37].
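As the simplest concrete choice, Î_{i;j,k} can be computed as a plug-in estimate from the empirical distribution. The sketch below is our own and omits the small-sample corrections of the estimators cited above:

import numpy as np

def I_hat(rankings, i, j, k):
    # rankings: int array of shape (m, n); rankings[s, c] is the position
    # of candidate c in sample s.  Plug-in estimate of Eq. (13).
    pos_i = rankings[:, i]
    b = (rankings[:, j] < rankings[:, k]).astype(int)
    mi = 0.0
    for p in np.unique(pos_i):
        for v in (0, 1):
            joint = np.mean((pos_i == p) & (b == v))
            if joint > 0:        # zero-probability cells contribute nothing
                mi += joint * np.log(
                    joint / (np.mean(pos_i == p) * np.mean(b == v)))
    return mi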
One may recast the above problem as an InH-partition problem over a hypergraph in which each candidate represents a vertex, and Î_{i;j,k} represents the inhomogeneous cost w_e({i}) for the hyperedge e = {i, j, k}. Note that as the mutual information Î(σ(i); 1_{σ(j)<σ(k)}) is in general asymmetric, one would not have been able to use H-partitions. The optimization problem reduces to min_S vol_H(∂S). The two optimization tasks are different, and we illustrate next that the InH-partition outperforms the original optimization approach AnchorsPartition (APar) [34] both on synthetic data and real data. Due to space limitations, synthetic data and a subset of the real dataset results are listed in the Supplementary Material.
Here, we analyzed the Irish House of Parliament election dataset (2002) [38]. The dataset consists of 2490 ballots fully ranking 14 candidates. The candidates came from a number of parties, of which Fianna Fáil (F.F.) and Fine Gael (F.G.) are the two largest (and rival) Irish political parties. Using InH-partition (InH-Par), one can split the candidates iteratively into two sets (see Figure 3), which yields meaningful clusters that correspond to the large parties: {1, 4, 13} (F.F.), {2, 5, 6} (F.G.), {7, 8, 9} (Ind.). We compared InH-partition with APar based on their performance in detecting these three clusters using a small training set: we independently sampled m rankings 100 times and executed both algorithms to partition the set of candidates iteratively. During the partitioning procedure, "party success" was declared if one exactly detected one of the three party clusters ("F.F.", "F.G." & "Ind."). "All" was used to designate that all three party clusters were detected completely correctly. InH-partition outperforms APar in recovering the cluster Ind. and achieves comparable performance for cluster F.F., although it performs a little worse than APar for cluster F.G.; InH-partition also offers superior overall performance compared to APar. We also compared InH-partition with APar in the large-sample regime (m = 2490), using only a subset of triple comparisons (hyperedges) sampled independently with probability r (this strategy significantly reduces the complexity of both algorithms). The average is computed over 100 independent runs. The results are shown in Figure 3, highlighting the robustness of InH-partition with respect to missing triples. Additional tests on ranking data are described in the Supplementary Material, along with new results on subspace clustering, motion segmentation and others.
6 Acknowledgement
The authors gratefully acknowledge many useful suggestions by the reviewers. They are also indebted
to the reviewers for providing many additional and relevant references. This work was supported in
part by the NSF grant CCF 1527636.
References
[1] A. K. Jain, M. N. Murty, and P. J. Flynn, "Data clustering: a review," ACM Computing Surveys (CSUR), vol. 31, no. 3, pp. 264-323, 1999.

[2] A. Y. Ng, M. I. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in Advances in Neural Information Processing Systems (NIPS), 2002, pp. 849-856.

[3] S. R. Bulò and M. Pelillo, "A game-theoretic approach to hypergraph clustering," in Advances in Neural Information Processing Systems (NIPS), 2009, pp. 1571-1579.

[4] M. Leordeanu and C. Sminchisescu, "Efficient hypergraph clustering," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2012, pp. 676-684.

[5] H. Liu, L. J. Latecki, and S. Yan, "Robust clustering as ensembles of affinity relations," in Advances in Neural Information Processing Systems (NIPS), 2010, pp. 1414-1422.

[6] N. Bansal, A. Blum, and S. Chawla, "Correlation clustering," in The 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2002, pp. 238-247.

[7] N. Ailon, M. Charikar, and A. Newman, "Aggregating inconsistent information: ranking and clustering," Journal of the ACM (JACM), vol. 55, no. 5, p. 23, 2008.

[8] P. Li, H. Dau, G. Puleo, and O. Milenkovic, "Motif clustering and overlapping clustering for social network analysis," in IEEE Conference on Computer Communications (INFOCOM), 2017, pp. 109-117.

[9] A. R. Benson, D. F. Gleich, and J. Leskovec, "Higher-order organization of complex networks," Science, vol. 353, no. 6295, pp. 163-166, 2016.

[10] H. Yin, A. R. Benson, J. Leskovec, and D. F. Gleich, "Local higher-order graph clustering," in Proceedings of the 23rd ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), 2017, pp. 555-564.

[11] D. Zhou, J. Huang, and B. Schölkopf, "Learning with hypergraphs: Clustering, classification, and embedding," in Advances in Neural Information Processing Systems, 2007, pp. 1601-1608.

[12] M. Hein, S. Setzer, L. Jost, and S. S. Rangapuram, "The total variation on hypergraphs - learning on hypergraphs revisited," in Advances in Neural Information Processing Systems (NIPS), 2013, pp. 2427-2435.

[13] C. Zhang, S. Hu, Z. G. Tang, and T. H. Chan, "Re-revisiting learning on hypergraphs: confidence interval and subgradient method," in International Conference on Machine Learning (ICML), 2017, pp. 4026-4034.

[14] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie, "Beyond pairwise clustering," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2005, pp. 838-845.

[15] S. Kim, S. Nowozin, P. Kohli, and C. D. Yoo, "Higher-order correlation clustering for image segmentation," in Advances in Neural Information Processing Systems (NIPS), 2011, pp. 1530-1538.

[16] H. Jeong, B. Tombor, R. Albert, Z. N. Oltvai, and A.-L. Barabási, "The large-scale organization of metabolic networks," Nature, vol. 407, no. 6804, pp. 651-654, 2000.

[17] F. R. Chung, "Four proofs for the Cheeger inequality and graph partition algorithms," in Proceedings of ICCM, vol. 2, 2007, p. 378.

[18] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.

[19] J. Kunegis, S. Schmidt, A. Lommatzsch, J. Lerner, E. W. De Luca, and S. Albayrak, "Spectral analysis of signed graphs for clustering, prediction and visualization," in SIAM International Conference on Data Mining (ICDM), 2010, pp. 559-570.

[20] A. V. Knyazev, "Signed Laplacian for spectral clustering revisited," arXiv preprint arXiv:1701.01394, 2017.

[21] C. Tsourakakis, J. Pachocki, and M. Mitzenmacher, "Scalable motif-aware graph clustering," arXiv preprint arXiv:1606.06235, 2016.

[22] N. R. Devanur, S. Dughmi, R. Schwartz, A. Sharma, and M. Singh, "On the approximation of submodular functions," arXiv preprint arXiv:1304.4948, 2013.

[23] S. Agarwal, K. Branson, and S. Belongie, "Higher order learning with graphs," in International Conference on Machine Learning (ICML), 2006, pp. 17-24.

[24] G. Li, L. Qi, and G. Yu, "The z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory," Numerical Linear Algebra with Applications, vol. 20, no. 6, pp. 1001-1029, 2013.

[25] A. R. Benson, D. F. Gleich, and J. Leskovec, "Tensor spectral clustering for partitioning higher-order network structures," in Proceedings of the 2015 SIAM International Conference on Data Mining (ICDM), 2015, pp. 118-126.

[26] A. Louis, "Hypergraph Markov operators, eigenvalues and approximation algorithms," in Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing (STOC), 2015, pp. 713-722.

[27] D. Ghoshdastidar and A. Dukkipati, "Consistency of spectral partitioning of uniform hypergraphs under planted partition model," in Advances in Neural Information Processing Systems (NIPS), 2014, pp. 397-405.

[28] D. Ghoshdastidar and A. Dukkipati, "Consistency of spectral hypergraph partitioning under planted partition model," arXiv preprint arXiv:1505.01582, 2015.

[29] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, "Network motifs: simple building blocks of complex networks," Science, vol. 298, no. 5594, pp. 824-827, 2002.

[30] "Florida Bay trophic exchange matrix," http://vlado.fmf.uni-lj.si/pub/networks/data/bio/foodweb/Florida.paj.

[31] S. Allesina, A. Bodini, and C. Bondavalli, "Ecological subsystems via graph theory: the role of strongly connected components," Oikos, vol. 110, no. 1, pp. 164-176, 2005.

[32] P. Awasthi, A. Blum, O. Sheffet, and A. Vijayaraghavan, "Learning mixtures of ranking models," in Advances in Neural Information Processing Systems (NIPS), 2014, pp. 2609-2617.

[33] C. Meek and M. Meila, "Recursive inversion models for permutations," in Advances in Neural Information Processing Systems (NIPS), 2014, pp. 631-639.

[34] J. Huang, C. Guestrin et al., "Uncovering the riffled independence structure of ranked data," Electronic Journal of Statistics, vol. 6, pp. 199-230, 2012.

[35] J. Jiao, K. Venkat, Y. Han, and T. Weissman, "Maximum likelihood estimation of functionals of discrete distributions," IEEE Transactions on Information Theory, vol. 63, no. 10, pp. 6774-6798, 2017.

[36] Y. Bu, S. Zou, Y. Liang, and V. V. Veeravalli, "Estimation of KL divergence: optimal minimax rate," arXiv preprint arXiv:1607.02653, 2016.

[37] W. Gao, S. Oh, and P. Viswanath, "Demystifying fixed k-nearest neighbor information estimators," in IEEE International Symposium on Information Theory (ISIT), 2017, pp. 1267-1271.

[38] I. C. Gormley and T. B. Murphy, "A latent space model for rank data," in Statistical Network Analysis: Models, Issues, and New Directions. Springer, 2007, pp. 90-102.
6,441 | 6,826 | Differentiable Learning of Logical Rules for Knowledge Base Reasoning
Fan Yang
Zhilin Yang
William W. Cohen
School of Computer Science
Carnegie Mellon University
{fanyang1,zhiliny,wcohen}@cs.cmu.edu
Abstract
We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning
the parameters in a continuous space as well as the structure in a discrete space.
We propose a framework, Neural Logic Programming, that combines the parameter
and structure learning of first-order logical rules in an end-to-end differentiable
model. This approach is inspired by a recently-developed differentiable logic called
TensorLog [5], where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose
these operations. Empirically, our method outperforms prior work on multiple
knowledge base benchmark datasets, including Freebase and WikiMovies.
1
Introduction
A large body of work in AI and machine learning has considered the problem of learning models
composed of sets of first-order logical rules. An example of such rules is shown in Figure 1. Logical
rules are useful representations for knowledge base reasoning tasks because they are interpretable,
which can provide insight to inference results. In many cases this interpretability leads to robustness
in transfer tasks. For example, consider the scenario in Figure 1. If new facts about more companies
or locations are added to the knowledge base, the rule about HasOfficeInCountry will still be
usefully accurate without retraining. The same might not be true for methods that learn embeddings
for specific knowledge base entities, as is done in TransE [3].
[Figure 1 graphic: the question "In which country Y does X have office?" answered with the rule in the box, HasOfficeInCountry(Y, X) ← HasOfficeInCity(Z, X), CityInCountry(Y, Z); e.g., the facts HasOfficeInCity(New York, Uber) and CityInCountry(USA, New York) give X = Uber, Y = USA, while HasOfficeInCity(Paris, ·) and CityInCountry(France, Paris) give Y = France.]

Figure 1: Using logical rules (shown in the box) for knowledge base reasoning.
Learning collections of relational rules is a type of statistical relational learning [7], and when the
learning involves proposing new logical rules, it is often called inductive logic programming [18]. Often the underlying logic is a probabilistic logic, such as Markov Logic Networks [22] or ProPPR [26]. The advantage of using a probabilistic logic is that by equipping logical rules with probability, one can better model statistically complex and noisy data. Unfortunately, this learning problem is quite difficult: it requires learning both the structure (i.e. the particular sets of rules
included in a model) and the parameters (i.e. confidence associated with each rule). Determining
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the structure is a discrete optimization problem, and one that involves search over a potentially large
problem space. Many past learning systems have thus used optimization methods that interleave
moves in a discrete structure space with moves in parameter space [12, 13, 14, 27].
In this paper, we explore an alternative approach: a completely differentiable system for learning
models defined by sets of first-order rules. This allows one to use modern gradient-based programming
frameworks and optimization methods for the inductive logic programming task. Our approach
is inspired by a differentiable probabilistic logic called TensorLog [5]. TensorLog establishes a
connection between inference using first-order rules and sparse matrix multiplication, which enables
certain types of logical inference tasks to be compiled into sequences of differentiable numerical
operations on matrices. However, TensorLog is limited as a learning system because it only learns
parameters, not rules. In order to learn parameters and structure simultaneously in a differentiable
framework, we design a neural controller system with an attention mechanism and memory to learn
to sequentially compose the primitive differentiable operations used by TensorLog. At each stage of
the computation, the controller uses attention to "softly" choose a subset of TensorLog's operations,
and then performs the operations with contents selected from the memory. We call our approach
neural logic programming, or Neural LP.
Experimentally, we show that Neural LP performs well on a number of tasks. It improves the
performance in knowledge base completion on several benchmark datasets, such as WordNet18
and Freebase15K [3]. And it obtains state-of-the-art performance on Freebase15KSelected [25],
a recent and more challenging variant of Freebase15K. Neural LP also performs well on standard
benchmark datasets for statistical relational learning, including datasets about biomedicine and
kinship relationships [12]. Since good performance on many of these datasets can be obtained using
short rules, we also evaluate Neural LP on a synthetic task which requires longer rules. Finally, we
show that Neural LP can perform well in answering partially structured queries, where the query is
posed partially in natural language. In particular, Neural LP also obtains state-of-the-art results on the
KB version of the WikiMovies dataset [16] for question-answering against a knowledge base. In
addition, we show that logical rules can be recovered by executing the learned controller on examples
and tracking the attention.
To summarize, the contributions of this paper include the following. First, we describe Neural LP,
which is, to our knowledge, the first end-to-end differentiable approach to learning not only the
parameters but also the structure of logical rules. Second, we experimentally evaluate Neural LP on
several types of knowledge base reasoning tasks, illustrating that this new approach to inductive logic
programming outperforms prior work. Third, we illustrate techniques for visualizing a Neural LP
model as logical rules.
2 Related work
Structure embedding [3, 24, 29] has been a popular approach to reasoning with a knowledge base.
This approach usually learns an embedding that maps knowledge base relations (e.g. CityInCountry)
and entities (e.g. USA) to tensors or vectors in latent feature spaces. Though our Neural LP system can
be used for similar tasks as structure embedding, the methods are quite different. Structure embedding
focuses on learning representations of relations and entities, while Neural LP learns logical rules.
In addition, logical rules learned by Neural LP can be applied to entities not seen at training time.
This is not achievable by structure embedding, since its reasoning ability relies on entity-dependent
representations.
Neural LP differs from prior work on logical rule learning in that the system is end-to-end differentiable, thus enabling gradient-based optimization, while most prior work involves discrete search
in the problem space. For instance, Kok and Domingos [12] interleave beam search, using discrete
operators to alter a rule set, with parameter learning via numeric methods for rule confidences. Lao
and Cohen [13] introduce all rules from a restricted set, then use lasso-style regression to select a
subset of predictive rules. Wang et al. [27] use an Iterative Structural Gradient algorithm that alternates
gradient-based search for parameters of a probabilistic logic ProPPR [26], with structural additions
suggested by the parameter gradients.
Recent work on neural program induction [21, 20, 1, 8] has used attention mechanisms to "softly
choose" differentiable operators, where the attentions are simply approximations to binary choices.
The main difference in our work is that attentions are treated as confidences of the logical rules and
2
have semantic meanings. In other words, Neural LP learns a distribution over logical rules, instead of
an approximation to a particular rule. Therefore, we do not use hardmax to replace softmax during
inference time.
3 Framework
3.1 Knowledge base reasoning
Knowledge bases are collections of relational data of the format Relation (head, tail), where
head and tail are entities and Relation is a binary relation between entities. Examples of such
data tuples are HasOfficeInCity(New York, Uber) and CityInCountry(USA, New York).
The knowledge base reasoning task we consider here consists of a query¹, an entity tail that the
query is about, and an entity head that is the answer to the query. The goal is to retrieve a ranked list
of entities based on the query such that the desired answer (i.e. head) is ranked as high as possible.
To reason over knowledge base, for each query we are interested in learning weighted chain-like
logical rules of the following form, similar to stochastic logic programs [19],
    α   query(Y, X) ← R_n(Y, Z_n) ∧ · · · ∧ R_1(Z_1, X)                               (1)

where α ∈ [0, 1] is the confidence associated with this rule, and R_1, . . ., R_n are relations in the
knowledge base. During inference, given an entity x, the score of each y is defined as the sum of the
confidences of the rules that imply query(y, x), and we return a ranked list of entities in which a
higher score implies a higher rank.
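To make the scoring concrete, the following minimal Python sketch grounds a single weighted chain rule against a toy knowledge base; the facts, the rule, its confidence, and all names are illustrative placeholders rather than anything specified in the paper:

```python
# A minimal sketch of scoring candidate answers y for a query about x:
# the score of y is the summed confidence of all weighted rules whose
# body can be grounded in the KB so that the head query(y, x) holds.
kb = {
    ("HasOfficeInCity", "NewYork", "Uber"),
    ("CityInCountry", "USA", "NewYork"),
}

# One rule: HasOfficeInCountry(Y, X) <- HasOfficeInCity(Z, X) AND CityInCountry(Y, Z),
# encoded as (confidence, [relations along the chain from X up to Y]).
rules = {"HasOfficeInCountry": [(0.9, ["HasOfficeInCity", "CityInCountry"])]}

def score(query, y, x):
    total = 0.0
    for conf, chain in rules.get(query, []):
        frontier = {x}                      # start from the tail entity
        for rel in chain:                   # follow the chain of relations
            frontier = {h for (r, h, t) in kb if r == rel and t in frontier}
        if y in frontier:                   # head entity reached => rule fires
            total += conf
    return total

print(score("HasOfficeInCountry", "USA", "Uber"))  # 0.9
```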
3.2 TensorLog for KB reasoning
We next introduce TensorLog operators and then describe how they can be used for KB reasoning.
Given a knowledge base, let E be the set of all entities and R be the set of all binary relations. We map
all entities to integers, and each entity i is associated with a one-hot encoded vector v_i ∈ {0, 1}^{|E|}
such that only the i-th entry is 1. TensorLog defines an operator M_R for each relation R. Concretely,
M_R is a matrix in {0, 1}^{|E|×|E|} such that its (i, j) entry is 1 if and only if R(i, j) is in the knowledge
base, where i is the i-th entity and similarly for j.
We now draw the connection between TensorLog operations and a restricted case of logical rule
inference. Using the operators described above, we can imitate logical rule inference R(Y, X) ← P(Y,
Z) ∧ Q(Z, X) for any entity X = x by performing the matrix multiplications M_P · M_Q · v_x = s. In other
words, the non-zero entries of the vector s equal the set of y such that there exists z with P(y, z) and
Q(z, x) both in the KB. Though we describe the case where rule length is two, it is straightforward to
generalize this connection to rules of any length.
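The matrix-multiplication view can be reproduced in a few lines; the sketch below uses sparse relation matrices over a toy set of four entities (the entities and relations are our own; only the M_P · M_Q · v_x mechanics follow the text):

```python
# Sketch of the TensorLog-style correspondence between a length-2 rule
# R(Y, X) <- P(Y, Z) AND Q(Z, X) and sparse matrix multiplication.
# Entities are integers 0..|E|-1; M_R[i, j] = 1 iff R(i, j) is in the KB.
import numpy as np
from scipy.sparse import csr_matrix

num_entities = 4

def relation_matrix(pairs):
    """Build M_R from a list of (head, tail) pairs."""
    rows, cols = zip(*pairs)
    data = np.ones(len(pairs))
    return csr_matrix((data, (rows, cols)), shape=(num_entities, num_entities))

M_P = relation_matrix([(2, 1)])          # P(2, 1) holds
M_Q = relation_matrix([(1, 0), (3, 0)])  # Q(1, 0) and Q(3, 0) hold

v_x = np.zeros(num_entities)
v_x[0] = 1.0                             # one-hot vector for entity x = 0

s = M_P @ (M_Q @ v_x)                    # nonzeros of s are the valid y's
print(np.nonzero(s)[0])                  # [2]: R(2, 0) is implied by the rule
```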
Using TensorLog operations, what we want to learn for each query is shown in Equation 2,

    Σ_l α_l Π_{k∈β_l} M_{R_k}                                                          (2)

where l indexes over all possible rules, α_l is the confidence associated with rule l, and β_l is an ordered
list of all relations in this particular rule. During inference, given an entity v_x, the score of each
retrieved entity is then equivalent to the entries in the vector s, as shown in Equation 3.

    s = Σ_l ( α_l (Π_{k∈β_l} M_{R_k}) v_x ),    score(y | x) = v_y^T s                 (3)
To summarize, we are interested in the following learning problem for each query:

    max_{{α_l, β_l}} Σ_{{x,y}} score(y | x) = max_{{α_l, β_l}} Σ_{{x,y}} v_y^T ( Σ_l α_l (Π_{k∈β_l} M_{R_k}) v_x )    (4)

where {x, y} are entity pairs that satisfy the query, and {α_l, β_l} are to be learned.
¹ In this work, the notion of a query refers to relations, which differs from the conventional notion, where a
query usually contains both a relation and an entity.
Figure 2: The neural controller system.
3.3 Learning the logical rules
We will now describe the differentiable rule learning process, including learnable parameters and
the model architecture. As shown in Equation 2, for each query, we need to learn the set of rules
that imply it and the confidences associated with these rules. However, it is difficult to formulate a
differentiable process to directly learn the parameters and the structure {?l , ?l }. This is because each
parameter is associated with a particular rule, and enumerating rules is an inherently discrete task. To
overcome this difficulty, we observe that a different way to write Equation 2 is to interchange the
summation and product, resulting in the following formula with a different parameterization,

    Π_{t=1}^T Σ_{k=1}^{|R|} a_t^k M_{R_k}                                              (5)
where T is the max length of rules and |R| is the number of relations in the knowledge base. The key
parameterization difference between Equation 2 and Equation 5 is that in the latter we associate each
relation in the rule with a weight. This combines the rule enumeration and confidence assignment.
However, the parameterization in Equation 5 is not sufficiently expressive, as it assumes that all rules
are of the same length. We address this limitation in Equation 6-8, where we introduce a recurrent
formulation similar to Equation 3.
In the recurrent formulation, we use auxiliary memory vectors u_t. Initially the memory vector is set
to the given entity v_x. At each step, as described in Equation 7, the model first computes a weighted
average of previous memory vectors using the memory attention vector b_t. Then the model "softly"
applies the TensorLog operators using the operator attention vector a_t. This formulation allows the
model to apply the TensorLog operators on all previous partial inference results, instead of just the
last step's.
    u_0 = v_x                                                                          (6)

    u_t = ( Σ_{k=1}^{|R|} a_t^k M_{R_k} ) ( Σ_{τ=0}^{t−1} b_t^τ u_τ )    for 1 ≤ t ≤ T (7)

    u_{T+1} = Σ_{τ=0}^{T} b_{T+1}^τ u_τ                                                (8)
Finally, the model computes a weighted average of all memory vectors, thus using attention to select
the proper rule length. Given the above recurrent formulation, the learnable parameters for each
query are {a_t | 1 ≤ t ≤ T} and {b_t | 1 ≤ t ≤ T + 1}.
We now describe a neural controller system to learn the operator and memory attention vectors.
We use recurrent neural networks not only because they fit with our recurrent formulation, but also
because it is likely that the current step's attentions depend on the previous steps'. At every step
t ∈ [1, T + 1], the network predicts operator and memory attention vectors using Equations 9, 10,
and 11. The input is the query for 1 ≤ t ≤ T and a special END token when t = T + 1.
    h_t = update(h_{t−1}, input)                                                       (9)
    a_t = softmax(W h_t + b)                                                           (10)
    b_t = softmax([h_0, . . ., h_{t−1}]^T h_t)                                         (11)
The system then performs the computation in Equation 7 and stores u_t into the memory. The memory
holds each step's partial inference results, i.e. {u_0, . . ., u_t, . . ., u_{T+1}}. Figure 2 shows an overview
of the system. The final inference result u is just the last vector in memory, i.e. u_{T+1}. As discussed
in Equation 4, the objective is to maximize v_y^T u. In particular, we maximize log v_y^T u because the
nonlinearity empirically improves the optimization performance. We also observe that normalizing
the memory vectors (i.e. u_t) to have unit length sometimes improves the optimization.
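As an illustration of Equations 6-8, the following NumPy sketch runs the recurrent memory update with random attention vectors standing in for the controller outputs; in the full model, a_t and b_t are produced by the trained LSTM controller, and the relation matrices would be sparse:

```python
# A minimal NumPy sketch of the recurrence in Equations 6-8 with toy,
# untrained attention vectors; sizes and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, T = 5, 3, 2
M = rng.integers(0, 2, size=(num_relations, num_entities, num_entities)).astype(float)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

v_x = np.zeros(num_entities); v_x[0] = 1.0
memory = [v_x]                                         # u_0 = v_x  (Eq. 6)
for t in range(1, T + 1):
    a_t = softmax(rng.normal(size=num_relations))      # operator attention
    b_t = softmax(rng.normal(size=len(memory)))        # memory attention
    read = sum(b * u for b, u in zip(b_t, memory))     # weighted read   (Eq. 7)
    u_t = sum(a * (M_k @ read) for a, M_k in zip(a_t, M))
    memory.append(u_t)

b_final = softmax(rng.normal(size=len(memory)))
u = sum(b * u_ for b, u_ in zip(b_final, memory))      # u_{T+1}  (Eq. 8)
print(u)
```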
To recover logical rules from the neural controller system, for each query we can write the rules and
their confidences {α_l, β_l} in terms of the attention vectors {a_t, b_t}. Based on the relationship between
Equation 3 and Equations 6-8, we can recover rules by following Equation 7 and keeping track of the
coefficients in front of each matrix M_{R_k}. The detailed procedure is presented in Algorithm 1.
Algorithm 1 Recover logical rules from attention vectors
Input: attention vectors {a_t | t = 1, . . ., T} and {b_t | t = 1, . . ., T + 1}
Notation: Let R_t = {r_1, . . ., r_l} be the set of partial rules at step t. Each rule r_l is represented by
a pair (α, β) as described in Equation 1, where α is the confidence and β is an ordered list of
relation indexes.
Initialize: R_0 = {r_0} where r_0 = (1, ( )).
for t = 1 to T + 1 do
    Initialize: R̂_t = ∅, a placeholder for storing intermediate results.
    for τ = 0 to t − 1 do
        for rule (α, β) in R_τ do
            Update α′ ← α · b_t^τ. Store the updated rule (α′, β) in R̂_t.
    if t ≤ T then
        Initialize: R_t = ∅
        for rule (α, β) in R̂_t do
            for k = 1 to |R| do
                Update α′ ← α · a_t^k, β′ ← β append k. Add the updated rule (α′, β′) to R_t.
    else
        R_t = R̂_t
return R_{T+1}
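A direct Python transcription of Algorithm 1 is given below, assuming the attentions are supplied as plain arrays indexed by step; the toy attention values are illustrative only:

```python
# Expand the attention vectors into an explicit set of
# (confidence, relation-chain) rules, following Algorithm 1.
def recover_rules(a, b, T, num_relations):
    rules = {0: [(1.0, ())]}                     # R_0 = {(1, ())}
    for t in range(1, T + 2):                    # t = 1 .. T+1
        partial = []                             # R-hat_t
        for tau in range(t):                     # weight by memory attention
            for conf, chain in rules[tau]:
                partial.append((conf * b[t][tau], chain))
        if t <= T:                               # extend each rule by one relation
            rules[t] = [(conf * a[t][k], chain + (k,))
                        for conf, chain in partial
                        for k in range(num_relations)]
        else:
            rules[t] = partial
    return rules[T + 1]

# Toy attentions for T = 1 and two relations (a[t], b[t] indexed by step t).
a = {1: [0.7, 0.3]}
b = {1: [1.0], 2: [0.2, 0.8]}
for conf, chain in sorted(recover_rules(a, b, T=1, num_relations=2), reverse=True):
    print(round(conf, 3), chain)   # 0.56 (0,) / 0.24 (1,) / 0.2 ()
```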
4 Experiments
To test the reasoning ability of Neural LP, we conduct experiments on statistical relation learning, grid
path finding, knowledge base completion, and question answering against a knowledge base. For all
the tasks, the data used in the experiment are divided into three files: facts, train, and test. The facts
file is used as the knowledge base to construct TensorLog operators {M_{R_k} | R_k ∈ R}. The train and
test files contain query examples query (head, tail). Unlike in the case of learning embeddings,
we do not require the entities in train and test to overlap, since our system learns rules that are entity
independent.
Our system is implemented in TensorFlow and can be trained end-to-end using gradient methods.
The recurrent neural network used in the neural controller is long short-term memory [9], and the
hidden state dimension is 128. The optimization algorithm we use is mini-batch ADAM [11] with
batch size 64 and learning rate initially set to 0.001. The maximum number of training epochs is 10,
and validation sets are used for early stopping.
4.1 Statistical relation learning
We conduct experiments on two benchmark datasets [12] in statistical relation learning. The first
dataset, Unified Medical Language System (UMLS), is from biomedicine. The entities are biomedical
concepts (e.g. disease, antibiotic) and relations are like treats and diagnoses. The second
dataset, Kinship, contains kinship relationships among members of the Alyawarra tribe from Central
Australia [6]. Datasets statistics are shown in Table 1. We randomly split the datasets into facts, train,
test files as described above with ratio 6:2:1. The evaluation metric is Hits@10. Experiment results
are shown in Table 2. Comparing with Iterative Structural Gradient (ISG) [27], Neural LP achieves
better performance on both datasets. 2 We conjecture that this is mainly because of the optimization
strategy used in Neural LP, which is end-to-end gradient-based, while ISG?s optimization alternates
between structure and parameter search.
Table 1: Datasets statistics.

            # Data   # Relation   # Entity
UMLS          5960           46        135
Kinship       9587           25        104
Table 2: Experiment results (Hits@10). T indicates the maximum rule length.

                  ISG              Neural LP
              T = 2   T = 3    T = 2   T = 3
UMLS           43.5    43.3     92.0    93.2
Kinship        59.2    59.0     90.2    90.1

Figure 3: Accuracy on grid path finding.

4.2 Grid path finding
Since in the previous tasks the rules learned are of length at most three, we design a synthetic task
to test if Neural LP can learn longer rules. The experiment setup includes a knowledge base that
contains location information about a 16 by 16 grid, such as North((1,2), (1,1)) and SouthEast
((0,2), (1,1)). The query is randomly generated by combining a series of directions, such as
North_SouthWest. The train and test examples are pairs of start and end locations, which are
generated by randomly choosing a location on the grid and then following the queries. We classify
the queries into four classes based on the path length (i.e. Hamming distance between start and
end), ranging from two to ten. Figure 3 shows inference accuracy of this task for learning logical
rules using ISG [27] and Neural LP. As the path length and learning difficulty increase, the results
show that Neural LP can accurately learn rules of length 6-8 for this task, and is more robust than
ISG in terms of handling longer rules.
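The text does not spell out the generator, but a plausible sketch of producing such grid examples is the following, where the rejection of walks that exit the grid and all names are our assumptions:

```python
# Compose a random sequence of compass directions into a query, then
# walk the grid to produce (query, start, end) training examples.
import random

MOVES = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0),
         "NorthEast": (1, 1), "NorthWest": (-1, 1),
         "SouthEast": (1, -1), "SouthWest": (-1, -1)}
SIZE = 16

def make_example(path_len, rng=random):
    while True:
        dirs = [rng.choice(list(MOVES)) for _ in range(path_len)]
        x, y = rng.randrange(SIZE), rng.randrange(SIZE)
        start = (x, y)
        for d in dirs:
            dx, dy = MOVES[d]
            x, y = x + dx, y + dy
        if 0 <= x < SIZE and 0 <= y < SIZE:   # keep walks inside the grid
            return "_".join(dirs), start, (x, y)

print(make_example(path_len=3))
```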
4.3 Knowledge base completion
We also conduct experiments on the canonical knowledge base completion task as described in [3].
In this task, the query and tail are part of a missing data tuple, and the goal is to retrieve the
related head. For example, if HasOfficeInCountry (USA, Uber) is missing from the knowledge
base, then the goal is to reason over existing data tuples and retrieve USA when presented with
query HasOfficeInCountry and Uber. To represent the query as a continuous input to the neural
controller, we jointly learn an embedding lookup table for each query. The embedding has dimension
128 and is randomly initialized to unit norm vectors.
The knowledge bases in our experiments are from WordNet [17, 10] and Freebase [2]. We use the
datasets WN18 and FB15K, which are introduced in [3]. We also considered a more challenging
dataset, FB15KSelected [25], which is constructed by removing near-duplicate and inverse relations
from FB15K. We use the same train/validation/test split as in prior work and augment data files with
reversed data tuples, i.e. for each relation, we add its inverse inv_relation. In order to create a
² We use the implementation of ISG available at https://github.com/TeamCohen/ProPPR. In Wang
et al. [27], ISG is compared with other statistical relational learning methods in a different experiment setup, and
ISG is superior to several methods including Markov Logic Networks [12].
facts file which will be used as the knowledge base, we further split the original train file into facts
and train with ratio 3:1.³ The dataset statistics are summarized in Table 3.
Table 3: Knowledge base completion datasets statistics.

Dataset           # Facts   # Train   # Test   # Relation   # Entity
WN18              106,088    35,354    5,000           18     40,943
FB15K             362,538   120,604   59,071        1,345     14,951
FB15KSelected     204,087    68,028   20,466          237     14,541
The attention vector at each step is by default applied to all relations in the knowledge base. Sometimes
this creates an unnecessarily large search space. In our experiment on FB15K, we use a subset of
operators for each query. The subsets are chosen by including the top 128 relations that share common
entities with the query. For all datasets, the max rule length T is 2.
The evaluation metrics we use are Mean Reciprocal Rank (MRR) and Hits@10. MRR computes the
average of the reciprocal rank of the desired entities. Hits@10 computes the percentage of desired
entities ranked among the top ten. Following the protocol in Bordes et al. [3], we also use
filtered rankings. We compare the performance of Neural LP with several models, summarized in
Table 4.
Table 4.
Table 4: Knowledge base completion performance comparison. TransE [4] and Neural Tensor
Network [24] results are extracted from [29]. Results on FB15KSelected are from [25].
WN18
Neural Tensor Network
TransE
D IST M ULT [29]
Node+LinkFeat [25]
Implicit ReasoNets [23]
Neural LP
FB15K
FB15KSelected
MRR
Hits@10
MRR
Hits@10
MRR
Hits@10
0.53
0.38
0.83
0.94
0.94
66.1
90.9
94.2
94.3
95.3
94.5
0.25
0.32
0.35
0.82
0.76
41.4
53.9
57.7
87.0
92.7
83.7
0.25
0.23
0.24
40.8
34.7
36.2
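For reference, a minimal sketch of the filtered MRR and Hits@10 computation (following the filtering protocol of Bordes et al. [3]); the function and variable names are ours:

```python
# For each test triple: score all candidate entities, mask out other
# entities known to be correct answers ("filtered" setting), and rank
# the target entity among what remains.
import numpy as np

def filtered_metrics(scores, target, known_positives):
    """scores: (num_entities,) array; target: index of the true entity;
    known_positives: indices of all true answers to the same query."""
    mask = scores.copy()
    mask[list(known_positives - {target})] = -np.inf   # filter known answers
    rank = 1 + np.sum(mask > mask[target])             # 1-based rank
    return 1.0 / rank, float(rank <= 10)               # (MRR term, Hits@10 term)

scores = np.array([0.1, 0.9, 0.8, 0.3])
rr, hit = filtered_metrics(scores, target=2, known_positives={1, 2})
print(rr, hit)  # entity 1 is filtered out, so entity 2 ranks first: 1.0 1.0
```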
Neural LP gives state-of-the-art results on WN18, and results that are close to the state-of-the-art on
FB15K. It has been noted [25] that many relations in WN18 and FB15K also have their inverses defined,
which makes them easy to learn. FB15KSelected is a more challenging dataset, and on it, Neural LP
substantially improves the performance over Node+LinkFeat [25] and achieves similar performance
as DistMult [29] in terms of MRR. We note that in FB15KSelected, since the test entities are rarely
directly linked in the knowledge base, the models need to reason explicitly about compositions of
relations. The logical rules learned by Neural LP can very naturally capture such compositions.
Examples of rules learned by Neural LP are shown in Table 5. The number in front of each rule is the
normalized confidence, which is computed by dividing by the maximum confidence of rules for each
relation. From the examples we can see that Neural LP successfully combines structure learning
and parameter learning. It not only induces multiple logical rules to capture the complex structure in
the knowledge base, but also learns to distribute confidences over the rules.
To demonstrate the inductive learning advantage of Neural LP, we conduct experiments where training
and testing use disjoint sets of entities. To create such a setting, we first randomly select a subset of
the test tuples to be the test set. Secondly, we filter the train set by excluding any tuples that share
entities with selected test tuples. Table 6 shows the experiment results in this inductive setting.
³ We also make minimal adjustments to ensure that all query relations in test appear at least once in train and
all entities in train and test are also in facts. For FB15KSelected, we also ensure that entities in train are not
directly linked in facts.
Table 5: Examples of logical rules learned by Neural LP on FB15KSelected. The letters A,B,C are
ungrounded logic variables.
1.00 partially_contains(C, A) ← contains(B, A) ∧ contains(B, C)
0.45 partially_contains(C, A) ← contains(A, B) ∧ contains(B, C)
0.35 partially_contains(C, A) ← contains(C, B) ∧ contains(B, A)
1.00 marriage_location(C, A) ← nationality(C, B) ∧ contains(B, A)
0.35 marriage_location(B, A) ← nationality(B, A)
0.24 marriage_location(C, A) ← place_lived(C, B) ∧ contains(B, A)
1.00 film_edited_by(B, A) ← nominated_for(A, B)
0.20 film_edited_by(C, A) ← award_nominee(B, A) ∧ nominated_for(B, C)
Table 6: Inductive knowledge base completion. The metric is Hits@10.

                WN18    FB15K   FB15KSelected
TransE          0.01     0.48            0.53
Neural LP      94.49    73.28           27.97
As expected, the inductive setting results in a huge decrease in performance for the TransE model⁴,
which uses a transductive learning approach; for all three datasets, Hits@10 drops to near zero. In
contrast, Neural LP is much less affected by the amount of unseen entities and achieves performance
on the same scale as in the non-inductive setting. This emphasizes that our Neural LP model has the
advantage of being able to transfer to unseen entities.
4.4 Question answering against knowledge base
We also conduct experiments on a knowledge reasoning task where the query is "partially structured",
as the query is posed partially in natural language. An example of a partially structured query would
be "in which country does x have an office?" for a given entity x, instead of HasOfficeInCountry(Y,
x). Neural LP handles queries of this sort very naturally, since the input to the neural controller is a
vector which can encode either a structured query or natural language text.
We use the WikiMovies dataset from Miller et al. [16]. The dataset contains a knowledge base and
question-answer pairs. Each question (i.e. the query) is about an entity and the answers are sets of
entities in the knowledge base. There are 196,453 train examples and 10,000 test examples. The
knowledge base has 43,230 movie-related entities and nine relations. A subset of the dataset is shown
in Table 7.
Table 7: A subset of the WikiMovies dataset.
Knowledge base
directed_by (Blade Runner, Ridley Scott)
written_by (Blade Runner, Philip K. Dick)
starred_actors (Blade Runner, Harrison Ford)
starred_actors (Blade Runner, Sean Young)
Questions
What year was the movie Blade Runner released?
Who is the writer of the film Blade Runner?
We process the dataset to match the input format of Neural LP. For each question, we identify the
tail entity by checking which words match entities in the knowledge base. We also filter the
words in the question, keeping only the 100 most frequent words. The length of each question is
limited to six words. To represent the query in natural language as a continuous input for the neural
controller, we jointly learn an embedding lookup table for all words appearing in the query. The
query representation is computed as the arithmetic mean of the embeddings of the words in it.
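A small sketch of this query representation; the vocabulary and the embedding table below are placeholders for the jointly learned lookup table:

```python
# The query embedding is the arithmetic mean of the embeddings of the
# in-vocabulary words in the question.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"who": 0, "is": 1, "the": 2, "writer": 3, "of": 4, "film": 5}
embedding_table = rng.normal(size=(len(vocab), 128))   # learned jointly in practice

def query_embedding(question):
    ids = [vocab[w] for w in question.lower().split() if w in vocab]
    return embedding_table[ids].mean(axis=0)           # mean of word vectors

q = query_embedding("Who is the writer of the film")
print(q.shape)  # (128,)
```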
⁴ We use the implementation of TransE available at https://github.com/thunlp/KB2E.
We compare Neural LP with several embedding-based QA models. The main difference between
these methods and ours is that Neural LP does not embed the knowledge base, but instead learns
to compose operators defined on the knowledge base. The comparison is summarized in Table 8.
Experiment results are extracted from Miller et al. [16].
Table 8: Performance comparison. Memory Network is from [28]. QA system is from [4].

Model                                Accuracy
Memory Network                           78.5
QA system                                93.5
Key-Value Memory Network [16]            93.9
Neural LP                                94.6

Figure 4: Visualization of learned logical rules.
To visualize the learned model, we randomly sample 650 questions from the test dataset and compute
the embeddings of each question. We use t-SNE [15] to reduce the embeddings to the two-dimensional
space and plot them in Figure 4. Most learned logical rules consist of one relation from the knowledge
base, and we use different colors to indicate the different relations and label some clusters by relation.
The experiment results show that Neural LP can successfully handle queries that are posed in natural
language by jointly learning word representations as well as the logical rules.
5 Conclusions
We present an end-to-end differentiable method for learning the parameters as well as the structure
of logical rules for knowledge base reasoning. Our method, Neural LP, is inspired by a recent
probabilistic differentiable logic, TensorLog [5]. Empirically, Neural LP improves performance on
several knowledge base reasoning datasets. In the future, we plan to work on more problems where
logical rules are essential and complementary to pattern recognition.
Acknowledgments
This work was funded by NSF under IIS1250956 and by Google Research.
References
[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural
networks for question answering. In Proceedings of NAACL-HLT, pages 1545-1554, 2016.
[2] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a
collaboratively created graph database for structuring human knowledge. In Proceedings of
the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.
ACM, 2008.
[3] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko.
Translating embeddings for modeling multi-relational data. In Advances in neural information
processing systems, pages 2787-2795, 2013.
[4] Antoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676, 2014.
[5] William W Cohen. Tensorlog: A differentiable deductive database. arXiv preprint
arXiv:1605.06523, 2016.
[6] Woodrow W Denham. The detection of patterns in Alyawara nonverbal behavior. PhD thesis,
University of Washington, Seattle., 1973.
[7] Lise Getoor. Introduction to statistical relational learning. MIT press, 2007.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,
et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538
(7626):471-476, 2016.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735-1780, 1997.
[10] Adam Kilgarriff and Christiane Fellbaum. Wordnet: An electronic lexical database, 2000.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[12] Stanley Kok and Pedro Domingos. Statistical predicate invention. In Proceedings of the 24th
international conference on Machine learning, pages 433?440. ACM, 2007.
[13] Ni Lao and William W Cohen. Relational retrieval using a combination of path-constrained
random walks. Machine learning, 81(1):53-67, 2010.
[14] Ni Lao, Tom Mitchell, and William W Cohen. Random walk inference and learning in a large
scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural
Language Processing, pages 529-539. Association for Computational Linguistics, 2011.
[15] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine
Learning Research, 9(Nov):2579-2605, 2008.
[16] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and
Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint
arXiv:1606.03126, 2016.
[17] George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):
39-41, 1995.
[18] Stephen Muggleton, Ramon Otero, and Alireza Tamaddoni-Nezhad. Inductive logic programming, volume 38. Springer, 1992.
[19] Stephen Muggleton et al. Stochastic logic programs. Advances in inductive logic programming,
32:254-264, 1996.
[20] Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent
programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
[21] Arvind Neelakantan, Quoc V Le, Martin Abadi, Andrew McCallum, and Dario Amodei.
Learning a natural language interface with neural programmer. arXiv preprint arXiv:1611.08945,
2016.
[22] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine learning, 62(1-2):
107-136, 2006.
[23] Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and Jianfeng Gao. Implicit reasonet: Modeling
large-scale structured relationships with shared memory. arXiv preprint arXiv:1611.04642,
2016.
[24] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural
tensor networks for knowledge base completion. In Advances in neural information processing
systems, pages 926-934, 2013.
[25] Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and
text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and
their Compositionality, pages 57-66, 2015.
[26] William Yang Wang, Kathryn Mazaitis, and William W Cohen. Programming with personalized
pagerank: a locally groundable first-order probabilistic logic. In Proceedings of the 22nd ACM
international conference on Information & Knowledge Management, pages 2129-2138. ACM,
2013.
[27] William Yang Wang, Kathryn Mazaitis, and William W Cohen. Structure learning via parameter
learning. In CIKM 2014, 2014.
[28] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint
arXiv:1410.3916, 2014.
[29] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and
relations for learning and inference in knowledge bases. In ICLR, 2015.
6,442 | 6,827 | Deep Multi-task Gaussian Processes for
Survival Analysis with Competing Risks
Ahmed M. Alaa
Electrical Engineering Department
University of California, Los Angeles
[email protected]
Mihaela van der Schaar
Department of Engineering Science
University of Oxford
[email protected]
Abstract
Designing optimal treatment plans for patients with comorbidities requires accurate cause-specific mortality prognosis. Motivated by the recent availability of
linked electronic health records, we develop a nonparametric Bayesian model for
survival analysis with competing risks, which can be used for jointly assessing a
patient's risk of multiple (competing) adverse outcomes. The model views a patient's survival times with respect to the competing risks as the outputs of a deep
multi-task Gaussian process (DMGP), the inputs to which are the patients' covariates. Unlike parametric survival analysis methods based on Cox and Weibull models, our model uses DMGPs to capture complex non-linear interactions between
the patients' covariates and cause-specific survival times, thereby learning flexible patient-specific and cause-specific survival curves, all in a data-driven fashion
without explicit parametric assumptions on the hazard rates. We propose a variational inference algorithm that is capable of learning the model parameters from
time-to-event data while handling right censoring. Experiments on synthetic and
real data show that our model outperforms the state-of-the-art survival models.
1 Introduction
Designing optimal treatment plans for elderly patients or patients with comorbidities is a challenging
problem: the nature (and the appropriate level of invasiveness) of the best therapeutic intervention
for a patient with a specific clinical risk depends on whether this patient suffers from, or is susceptible to other "competing risks" [1-3]. For instance, the decision on whether a diabetic patient who
also has a renal disease should receive dialysis or a renal transplant must be based on a joint prognosis of diabetes-related complications and end-stage renal failure; overlooking the diabetes-related
risks may lead to misguided therapeutic decisions [1]. The same problem arises in nephrology,
where a typical patient's competing risks are peritonitis, death, kidney transplantation and transfer
to haemodialysis [2]. An even more common encounter with competing risks arises in oncology
and cardiovascular medicine, where the risk of a cardiac disease may alter the decision on whether a
cancer patient should undergo chemotherapy or a particular type of surgery [3]. Since conventional
methods for survival analysis, such as the Kaplan-Meier method and standard Cox proportional hazards regression, are not equipped to handle competing risks, alternate variants of those methods that
rely on cumulative incidence estimators have been proposed and used in clinical research [1-7].
According to the most recent data brief by the Office of the National Coordinator (ONC)¹, electronic
health records (EHRs) are currently deployed in more than 75% of hospitals in the United States
[8]. The increasing availability of data in EHRs has stimulated a great deal of research efforts
that used machine learning to conduct clinical risk prognosis and survival analysis. In particular,
¹ https://www.healthit.gov/sites/default/files/briefs/
various recent works have proposed novel methods for survival analysis based on Gaussian
processes [9], "temporal" logistic regression [10], ranking [11], and deep neural networks [12].
All these works were restricted to the conventional survival analysis problem in which
there is only one event of interest rather than a set of competing risks. (A detailed overview of
previous works is provided in Section 3.) The usage of machine learning to construct data-driven
survival models for patients with comorbidities is an important step towards precision medicine [13].
Contribution In the light of the discussion above, we develop a nonparametric Bayesian model for
survival analysis with competing risks using deep (multi-task) Gaussian processes (DMGPs) [15].
Our model relies on a novel conception of the competing risks problem as a multi-task learning problem; that is, we model the cause-specific survival times as the outputs of a random vector-valued
function [14], the inputs to which are the patients' covariates. This allows us to learn a "shared
representation" of the patients' survival times with respect to multiple related comorbidities. The
proposed model is Bayesian: we assign a prior distribution over a space of vector-valued functions of
the patients' covariates [16], and update the posterior distribution given a (potentially right-censored)
time-to-event dataset. This process gives rise to patient-specific multivariate survival distributions,
from which a patient-specific, cause-specific cumulative incidence function can be easily derived.
Such a patient-specific cumulative incidence function serves as actionable information, based upon
which clinicians can design personalized treatment plans. Unlike many existing parametric survival
models, our model neither assumes a parametric form for the interactions between the covariates and
the survival times, nor does it restrict the distribution of the survival times to a parametric model.
Thus, it can flexibly describe non-proportional hazard rates with complex interactions between covariates and survival times, which are common in many diseases with heterogeneous phenotypes
(such as cardiovascular diseases [2]). Inference of the patient-specific posterior survival distribution is
conducted via a variational Bayes algorithm; we use inducing variables to derive a variational lower
bound on the marginal likelihood of the observed time-to-event data [17], which we maximize using
the adaptive moment estimation algorithm [18]. We conduct a set of experiments on synthetic and
real data showing that our model outperforms state-of-the-art survival models.
2 Preliminaries
We consider a dataset D comprising survival (time-to-event) data for n subjects who have been
followed up for a finite amount of time. Let D = {X_i, T_i, k_i}_{i=1}^n, where X_i ∈ X is a d-dimensional
vector of covariates associated with subject i, T_i ∈ R_+ is the time until an event occurred, and
k_i ∈ K is the type of event that occurred. The set K = {∅, 1, . . ., K} is a finite set of K mutually
exclusive, competing events that could occur to subject i, where ∅ corresponds to right-censoring.
For simplicity of exposition, we assume that only one event occurs for every patient; this corresponds,
for instance, to the case when the events in K correspond to deaths due to different causes. This
assumption does not simplify the problem; in fact, it implies the non-identifiability of the event
times' distribution parameters [6, 7], which makes the problem more challenging. Figure 1 depicts a
time-to-event dataset D with patients dying due to either cancer or cardiovascular diseases, or having
their endpoints censored. Throughout this paper, we assume independent censoring [1-7], i.e.
censoring times are independent of clinical outcomes.
Figure 1: Depiction of the time-to-event data. Nine patients are followed from diagnosis until death
due to a cardiovascular event (k = 1), death due to cancer (k = 2), or censoring (k = ∅); the horizontal
axis is time since diagnosis.
Define a multivariate random variable T = (T^1, . . ., T^K), where T^k, k ∈ K, denotes the net survival time with respect to event k, i.e. the survival time of the subject given that only event k can
occur. We assume that T is drawn from a conditional density function that depends on the subject's
covariates. For every subject i, we only observe the occurrence time for the earliest event, i.e.
T_i = min(T_i^1, . . ., T_i^K) and k_i = arg min_j T_i^j.
The cause-specific hazard function λ_k(t, X) represents the instantaneous risk of event k, and is
formally defined as λ_k(t, X) = lim_{dt→0} (1/dt) P(t ≤ T^k < t + dt, k | T^k ≥ t, X) [6]. By the law of
total probability, the overall hazard function is given by λ(t, X) = Σ_{k∈K} λ_k(t, X). This leads to
the notion of a survival function S(t, X) = exp(−∫_0^t λ(u, X) du), which captures the probability of
a subject surviving all types of risk events up to time t. The Cumulative Incidence Function (CIF),
also known as the subdistribution function [2-7], is the probability of occurrence of a particular
event k ∈ K by time t, and is given by F_k(t, X) = ∫_0^t λ_k(u, X) S(u, X) du. Our main goal is to
estimate the CIF function using the dataset D; through these estimates, treatment plans can be set
up for patients who suffer from comorbidities or are at risk of different types of diseases.
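As a concrete illustration of these definitions, the following sketch discretizes the integrals to obtain S(t, X) and the cause-specific CIFs from two assumed hazard curves; the hazards themselves are arbitrary placeholders, not outputs of the model:

```python
# Given cause-specific hazards lambda_k(t, x) on a time grid, compute the
# overall survival function and the cause-specific CIFs numerically.
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
dt = t[1] - t[0]
hazards = np.vstack([0.05 * np.ones_like(t),   # lambda_1: constant risk
                     0.01 * t])                # lambda_2: risk grows with time

total_hazard = hazards.sum(axis=0)                    # lambda(t, X)
survival = np.exp(-np.cumsum(total_hazard) * dt)      # S(t, X)
cif = np.cumsum(hazards * survival, axis=1) * dt      # F_k(t, X)

# Sanity check: S(t) plus the summed CIFs should be close to 1.
print(survival[-1], cif[:, -1], cif[:, -1].sum() + survival[-1])
```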
3 Survival Analysis using Deep Multi-task Gaussian Processes
We conduct patient-specific survival analysis by directly modeling the event times T as a function
of the patients' covariates through the generative probabilistic model described hereunder.
Deep Multi-task Gaussian Processes (DMGPs) We assume that the net survival times for a patient with covariates X are generated via a (nonparametric) multi-output random function g(.),
i.e. T = g(X), and we use Gaussian processes to model g(.). A simple model of the form
g(X) = f(X) + ε, with f(.) being a Gaussian process and ε a Gaussian noise, would constrain
T to have a symmetric Gaussian distribution with a restricted parametric form conditional on X
[Sec. 2, 19]. This may not be a realistic construct for many settings in which the survival times
display an asymmetric distribution (e.g. cancer survival times [2]). To that end, we model g(.) as
a Deep multi-task Gaussian Process (DMGP) [15]: a multi-layer cascade of vector-valued Gaussian
processes that confers a greater representational power and produces outputs that are generally
non-Gaussian. In particular, we assume that the net survival times T are generated via a DMGP with
two layers as follows
    T = f_T(Z) + ε_T,    ε_T ∼ N(0, σ_T² I),
    Z = f_Z(X) + ε_Z,    ε_Z ∼ N(0, σ_Z² I),                                           (1)
where σ_T² and σ_Z² are the noise variances at the two layers, f_T(.) and f_Z(.) are two Gaussian
processes with hyperparameters θ_T and θ_Z respectively, and Z is a hidden variable that the first layer
passes to the second. Based on (1), we have that g(X) = f_T(f_Z(X) + ε_Z) + ε_T. The model in (1)
resembles a neural network with two layers and an infinite number of hidden nodes in each layer,
but with an output that can be described probabilistically in terms of a distribution. We assume that
f_T(.) has K outputs, whereas f_Z(.) has Q outputs. The use of Gaussian processes with two layers
allows us to jointly represent complex survival distributions and complex interactions with the covariates in a data-driven fashion, without the need to assume a predefined non-linear transformation
on the output space, as is the case in warped Gaussian processes [19-20].
A dataset D comprising n i.i.d. instances can be sampled from our model as follows:

    f_Z ∼ GP(0, K_{θ_Z}),
    f_T ∼ GP(0, K_{θ_T}),
    Z_i ∼ N(f_Z(X_i), σ_Z² I),
    T_i ∼ N(f_T(Z_i), σ_T² I),
    T_i = min(T_i^1, . . ., T_i^K),

for i ∈ {1, . . ., n}, where K_θ is the Gaussian process kernel with hyperparameters θ.
Figure 2: Graphical depiction of the probabilistic model. The covariates X (parent node) feed the
first layer f_Z (with noise ε_Z) to produce the hidden variable Z, which feeds the second layer f_T
(with noise ε_T) to produce the K competing event times T^1, . . ., T^K; the observed survival time
T is the leaf node.
Figure 2 provides a graphical depiction of our model (observable variables are in double-circled
nodes); the patient's covariates are the parent node, and the survival time is the leaf node.
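A minimal sketch of sampling from this two-layer generative model is given below. For simplicity it uses one shared RBF kernel per layer with independent outputs (identity coregionalization) and clips survival times to be positive; both are simplifications relative to the full model:

```python
# Sample net survival times from a two-layer multi-output GP prior as in (1).
import numpy as np

rng = np.random.default_rng(1)

def rbf(X1, X2, ls=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sample_layer(X, num_outputs, noise, ls):
    K = rbf(X, X, ls) + 1e-6 * np.eye(len(X))        # jitter for stability
    L = np.linalg.cholesky(K)
    f = L @ rng.normal(size=(len(X), num_outputs))   # independent GP outputs
    return f + noise * rng.normal(size=f.shape)

X = rng.uniform(0, 1, size=(50, 4))                  # n = 50 patients, d = 4
Z = sample_layer(X, num_outputs=2, noise=0.05, ls=0.5)        # hidden layer (Q = 2)
# abs() enforces positive times ad hoc; the model itself does not do this.
T_all = np.abs(sample_layer(Z, num_outputs=3, noise=0.05, ls=0.5))  # K = 3 events
T_obs, k_obs = T_all.min(axis=1), T_all.argmin(axis=1)        # observed time, cause
print(T_obs[:5], k_obs[:5])
```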
Survival Analysis as a Multi-task Learning Problem As can be seen in (1), the cause-specific net
survival times are viewed as the outputs of a vector-valued function g(.). This casts the competing
risks problem in a multi-task learning framework that allows finding a shared representation for the
subjects? survival behavior with respect to multiple correlated comorbidities, such as renal failure,
diabetes and cardiac diseases [1-3]. Such a shared representation is captured via the kernel functions
for the two DMGP layers (i.e. K?Z and K?T ). For both layers, we assume that the kernels follow
an intrinsic coregionalization model [14, 16], i.e.
    K_{θ_Z}(x, x′) = A_Z k_Z(x, x′),    K_{θ_T}(x, x′) = A_T k_T(x, x′),               (2)

where A_Z ∈ R_+^{Q×Q} and A_T ∈ R_+^{K×K} are positive semi-definite matrices, and k_Z(x, x′) and
k_T(x, x′) are radial basis functions with automatic relevance determination, i.e. k_Z(x, x′) =
exp(−(1/2) (x − x′)^T R_Z^{−1} (x − x′)), with R_Z = diag(ℓ²_{1,Z}, ℓ²_{2,Z}, . . ., ℓ²_{d,Z}) and ℓ_{j,Z}
being the length-scale parameter of the j-th feature (k_T(x, x′) can be defined similarly). Note that unlike regular
Gaussian processes, DMGPs are less sensitive to the selection of the parametric form of the kernel
functions [15]. This is because the output of the first layer undergoes a transformation through a
learned nonparametric function f_Z(.), and hence the "overall smoothness" of the function g(X) is
governed by an "equivalent data-driven kernel" function describing the transformation f_T(f_Z(.)).
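A small sketch of an intrinsic coregionalization kernel of the form in (2): the joint covariance over tasks and inputs is the Kronecker product of a positive semi-definite task matrix A with an ARD-RBF input kernel; the task matrix and length-scales below are illustrative, not learned:

```python
# Build the full multi-output covariance K = A (x) k_ARD(X, X).
import numpy as np

def ard_rbf(X1, X2, lengthscales):
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * (diff ** 2).sum(-1))

rng = np.random.default_rng(0)
n, d, K_tasks = 6, 3, 2
X = rng.normal(size=(n, d))
lengthscales = np.array([0.5, 1.0, 2.0])       # one ARD scale per feature

B = rng.normal(size=(K_tasks, K_tasks))
A = B @ B.T                                    # positive semi-definite task matrix
K_full = np.kron(A, ard_rbf(X, X, lengthscales))  # (K_tasks*n, K_tasks*n)
print(K_full.shape)  # (12, 12)
```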
Our model adopts a Bayesian approach to multi-task learning: it posits a prior distribution on the
multi-output function g(X), and then conducts the survival analysis by updating the posterior distribution of the event times P(g(X) | D, θ_Z, θ_T) given the evidential data in the time-to-event dataset
D. The distribution P(g(X) | D, θ_Z, θ_T) does not commit to any predefined parametric form, since
it depends on a random variable transformation through a nonparametric function g(.). In Section
4, we propose an inference algorithm for computing the posterior distribution P(T | D, X_*, θ_Z, θ_T)
for a given out-of-sample subject with covariates X_*. Once P(T | X_*, D) is computed, we can directly derive the CIF function F_k(t, X_*) for all events k ∈ K as explained in Section 2. A pictorial
visualization of the survival analysis procedure assuming 2 competing risks is provided in Fig. 3.
Figure 3: Pictorial depiction of survival analysis with 2 competing risks using deep multi-task Gaussian
processes. The posterior distribution of T given D is displayed in the top left panel, and the corresponding
cumulative incidence functions for a particular patient with covariates X_* are displayed in the bottom left panel.
The posterior distributions on the two DMGP layers conditional on their inputs are depicted in the right panels.
Related Works Standard survival modeling in the statistical and medical research literature is
largely based on either the nonparametric Kaplan-Meier estimator [21], or the (parametric) Cox
proportional hazard model [22]. The former is capable of learning flexible (and potentially non-proportional) survival curves but fails to incorporate patients' covariates, whereas the latter is capable of incorporating covariates, but is restricted to rigid parametric assumptions that impose proportional hazard curves. These limitations seem to have been inherited by various recently developed
Bayesian nonparametric survival models. For instance, [24] develops a Bayesian survival model
based on a Dirichlet prior, and [23] develops a model based on Gaussian latent fields, and proposes
an inference algorithm that utilizes nested Laplace approximations; however, neither model incorporates the individual patient's covariates, and hence both are restricted to estimating population-level
survival curves, which cannot inform personalized treatment plans. Contrarily, our model does not
suffer from any such limitations since it learns patient-specific, nonparametric survival curves by
adopting a Bayesian prior over a function space that takes the patients' covariates as an input.
A lot of interest has been recently devoted to the problem of survival analysis by the machine learning community. Recently developed survival models include random survival forests [26], deep
exponential families [12], dependent logistic regressors [10], ranking algorithms [11], and semiparametric Bayesian models based on Gaussian processes [9]. All of these methods are capable of
incorporating the individual patient's covariates, but none of them has considered the problem of
competing risks. The problem of survival analysis with competing risks has only been addressed
through two classical parametric models: (1) the Fine-Gray model, which modifies the traditional
proportional hazard model by direct transformation of the CIF [4], and (2) the threshold regression
(multi-state) models, which directly model net survival times as the first hitting times of a stochastic
process (e.g. Weiner process) [25]. Unlike our model, both models are limited by strong parametric
assumptions on both the hazard rates, and the nature of the interactions between the patient covariates and the survival curves. These limitations have been slightly alleviated in [19], which uses a
Gaussian process to model the interactions between survival times and covariates. However, this
model assumes a Gaussian distribution as a basis for an accelerated failure time model, which is
both unrealistic (since the distribution of survival times is often asymmetric) and also hinders the
nonparametric modeling of survival curves. The model in [19] can be ameliorated via a warped
Gaussian process that first transforms the survival times through a deterministic, monotonic nonlinear function, and then applies Gaussian process regression on the transformed survival times [20],
which would lead to more degrees of freedom in modeling the survival curves. Our model can be
thought of as a generalization of a warped Gaussian process in which the deterministic non-linear
transformation is replaced with another data-driven Gaussian process, which enables flexible nonparametric modeling of the survival curves. In Section 5, we demonstrate the superiority of our
model via experiments on synthetic and real datasets.
4 Inference
As discussed in Section 3, conducting survival analysis requires computing the posterior probability density dP(T_* | D, X_*, θ_Z, θ_T) for a given out-of-sample point X_* with T_* = g(X_*). We
follow an empirical Bayes approach for updating the posterior on g(.). That is, we first tune the
hyperparameters θ_Z and θ_T using the offline dataset D, and then for any out-of-sample patient
with covariates X_*, we evaluate dP(T_* | D, X_*, θ_Z, θ_T) by direct Monte Carlo sampling.
We calibrate the hyperparameters by maximizing the marginal likelihood dP(D | θZ, θT). Note
that for every subject i in D, we observe a "label" of the form (Ti, ki), indicating the type of event
that occurred to the subject along with the time of its occurrence. Since Ti is the smallest element
among the latent event times, the label (Ti, ki) is informative of all the events (i.e. all the learning tasks) in K∖{ki}: we
know that Ti^j ≥ Ti, ∀j ∈ K∖{ki}. We also note that the subject's data may be right-censored, i.e.
ki = ∅, which implies that Ti^j ≥ Ti, ∀j ∈ K. Hence, the likelihood of the survival information in D
is

dP({Xi, Ti, ki}_{i=1}^n | θZ, θT) ∝ dP({Ti}_{i=1}^n | {Xi}_{i=1}^n, θZ, θT),

where Ti is a set of events given by

Ti = { {Ti^{ki} = Ti} ∪ {Ti^j ≥ Ti}_{j ∈ K∖{ki}},   ki ≠ ∅,
     { {Ti^j ≥ Ti}_{j ∈ K},                          ki = ∅.        (3)
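To make the constraint structure of (3) concrete, the following sketch (ours, with 0 standing in for the censoring label ∅) shows how one observed label constrains all latent event times.

```python
# A sketch of how one observed label (Ti, ki) constrains all latent event
# times in Eq. (3); here 0 stands in for the censoring label (empty set).
def event_constraints(T_obs, k_obs, causes=(1, 2)):
    constraints = {}
    for k in causes:
        if k == k_obs:
            constraints[k] = ('==', T_obs)  # the observed cause: exact time
        else:
            constraints[k] = ('>=', T_obs)  # other causes occur no earlier
    return constraints

print(event_constraints(4.2, 1))  # {1: ('==', 4.2), 2: ('>=', 4.2)}
print(event_constraints(3.0, 0))  # censored: {1: ('>=', 3.0), 2: ('>=', 3.0)}
```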
We can write the marginal likelihood in (3) as a conditional density by marginalizing over the
conditional distribution of the hidden variable Zi as follows:

dP({Ti}_{i=1}^n | {Xi}_{i=1}^n, θZ, θT) = ∫ dP({Ti}_{i=1}^n | {Zi}_{i=1}^n, θT) dP({Zi}_{i=1}^n | {Xi}_{i=1}^n, θZ).        (4)
Since the integral in (4) is intractable, we follow the variational inference scheme proposed in [15],
where we tune the hyperparameters by maximizing the following variational bound on (4):

F = E_Q[ log( dP({Ti}_{i=1}^n, {Zi}_{i=1}^n, {fZ(Xi)}_{i=1}^n, {fT(Zi)}_{i=1}^n | {Xi}_{i=1}^n, θZ, θT) / Q ) ],

where Q is a variational distribution over {Zi}, fZ and fT, and F ≤ log(dP({Ti}_{i=1}^n | {Xi}_{i=1}^n, θZ, θT)). Since the
event Ti happens with a probability that can be written in terms of a Gaussian density conditional on fZ and fT, we can obtain a tractable version of the variational bound F by introducing a set of M pseudo-inputs to the two layers of the DMGP, with corresponding function values UZ and UT at the first and second layers [15, 17], and setting the variational distribution to
Q = P(fT(Zi) | UT, Zi) q(UT) q(Zi) P(fZ(Xi) | UZ, Xi) q(UZ), where q(Zi) is a Gaussian distribution, whereas q(UT) and q(UZ) are free-form variational distributions. Given these settings,
the variational lower bound can be written as [Eq. 13, 15]
F = E[ log dP({Ti}_{i=1}^n | {fT(Zi)}_{i=1}^n) + log( dP(UT) / q(UT) ) ]
  + E[ log dP({Zi}_{i=1}^n | {fZ(Xi)}_{i=1}^n) + log( dP(UZ) / q(UZ) ) ],        (5)

where the first expectation is taken with respect to P(fT(Zi) | UT, Zi) q(UT) q(Zi), whereas the
second is taken with respect to P(fZ(Xi) | UZ, Xi) q(UZ). Since all the densities involved in (5) are
Gaussian, F is tractable and can be written in closed form. We use the adaptive moment estimation
(ADAM) algorithm to optimize F with respect to θT and θZ [18].
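Since F is available in closed form, the calibration step is a standard stochastic optimization loop. Below is a minimal PyTorch sketch; the function variational_bound is a toy stand-in for the real bound F in Eq. (5), which depends on the kernel matrices of the two GP layers and is not reproduced here.

```python
import torch

def variational_bound(theta_Z, theta_T):
    # Toy stand-in for the closed-form bound F in Eq. (5).
    return -(theta_Z - 1.0).pow(2).sum() - (theta_T + 0.5).pow(2).sum()

theta_Z = torch.zeros(4, requires_grad=True)
theta_T = torch.zeros(4, requires_grad=True)
optimizer = torch.optim.Adam([theta_Z, theta_T], lr=1e-2)

for step in range(1000):
    optimizer.zero_grad()
    loss = -variational_bound(theta_Z, theta_T)  # maximize F = minimize -F
    loss.backward()
    optimizer.step()
```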
5 Experiments
In this Section, we validate our model by conducting a set of experiments on both a synthetic survival
model and a real-world time-to-event dataset. In all experiments, we use the cause-specific concordance index (C-index), recently proposed in [27], as a performance metric. The cause-specific
C-index quantifies the goodness of a model in ranking the subjects' survival times with respect to
a particular cause/event based on their covariates: a higher C-index indicates a better performance.
Formally, we define the (time-dependent) C-index for a cause k ∈ K as follows [Sec. 2.3, 27]:

Ck(t) := P( Fk(t, Xi) > Fk(t, Xj) | {ki = k} ∧ {Ti ≤ t} ∧ ({Ti < Tj} ∨ {kj ≠ k}) ),        (6)
where we have used the CIF Fk (t, X) as a natural choice for the prognostic score in [Eq. (2.3),
27]. The C-index defined in (6) corresponds to the probability that, for a time horizon t, a particular
survival analysis method prompts an assignment of CIF functions for subjects i and j that satisfy
Fk (t, Xi ) > Fk (t, Xj ), given that ki = k, Ti < Tj , and that subject i was not right-censored
by time t. A high C-index for cause k is achieved if the cause-specific CIF functions for a group
of subjects who encounter event k are likely to be "ordered" in accordance with the ordering of
their realized survival times. In all experiments, we estimate the C-index for the survival analysis
methods under consideration using the function cindex of the R package pec (https://cran.r-project.org/web/packages/pec/index.html) [Sec. 3, 27].
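To make the metric concrete, here is a naive plug-in estimator of (6) in Python. Unlike the cindex function of pec, this sketch ignores the inverse-probability-of-censoring weights, so it is for illustration only.

```python
import numpy as np

def naive_cause_specific_cindex(F_k_at_t, T, k_obs, cause, t):
    """Plug-in estimate of C_k(t) from Eq. (6), ignoring censoring weights.

    F_k_at_t : predicted CIF values F_k(t, X_i) for each subject
    T, k_obs : observed times and event labels (0 encodes censoring)
    """
    n = len(T)
    concordant, comparable = 0, 0
    for i in range(n):
        if k_obs[i] != cause or T[i] > t:
            continue  # subject i must experience event `cause` by time t
        for j in range(n):
            if j == i:
                continue
            if T[i] < T[j] or k_obs[j] != cause:  # acceptable comparator j
                comparable += 1
                concordant += F_k_at_t[i] > F_k_at_t[j]
    return concordant / comparable if comparable else np.nan
```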
We run the algorithm in Section 4 with Q = 3 outputs for the first layer of the DMGP, and we use
the default settings prescribed in [18] for the ADAM algorithm. We compare our model with four
benchmarks: the Fine-Gray proportional subdistribution hazards model (FG) [4, 28], the accelerated failure time model using multi-task Gaussian processes (MGP) [19], the cause-specific Cox
proportional hazards model (Cox) [27, 28], and the threshold-regression (multi-state) first-time hitting model with a multidimensional Wiener process (THR) [25]. The MGP benchmark is a special
case of our model with 1 layer and a deterministic linear transformation of the survival times to
Gaussian process outputs [Sec. 3, 19]. We run the FG and Cox benchmarks using the R libraries
cmprsk and survival, whereas for the THR benchmark, we use the R package threg (https://cran.r-project.org/web/packages/threg/index.html).
5.1 Synthetic Data
The goal of this Section is to demonstrate the ability of our model to cope with highly heterogeneous patient cohorts; we demonstrate this by running experiments on two synthetic models with different types of interactions between survival times and covariates:

Model A:                               Model B:
Xi ∼ N(0, I),                          Xi ∼ N(0, I),
Ti^1 ∼ exp(β1ᵀ Xi),                    Ti^1 ∼ exp(cosh(β1ᵀ Xi)),
Ti^2 ∼ exp(β2ᵀ Xi),                    Ti^2 ∼ exp(|N(0, 1) + sinh(β2ᵀ Xi)|),
Ti = min{Ti^1, Ti^2},                  Ti = min{Ti^1, Ti^2},
ki = arg min_{k∈{1,2}} Ti^k,           ki = arg min_{k∈{1,2}} Ti^k,
i ∈ {1, . . ., n}.                     i ∈ {1, . . ., n}.
In particular, we run experiments using the synthetic survival models A and B described above;
the two models correspond to two patient cohorts that differ in terms of patients' heterogeneity.

Figure 4: Results for model A. (Plot: cause-specific C-index C1(t) of DMGP, MGP, THR, Cox and FG against the time horizon t, from 2.5 to 10.)

Figure 5: Results for model B. (Plot: cause-specific C-index C1(t) of the same five methods against the time horizon t.)

In model A, we assume that survival times are exponentially distributed with a mean parameter that
comprises a simple linear function of the covariates, whereas in model B, we assume that the survival
distributions are not necessarily exponential, and that their parameters depend on the covariates in
a nonlinear fashion through the sinh and cosh functions. Both models have two competing risks,
i.e. K = {∅, 1, 2}, and for both models we assume that each patient has d = 10 covariates that are
drawn from a standard normal distribution. The parameters β1 and β2 are 10-dimensional vectors,
the elements of which are drawn independently from a uniform distribution. Given a draw of β1
and β2, a dataset D with n subjects can be sampled using the models described above. We run
10,000 repeated experiments using each model, where in each experiment we draw a new β1, β2,
and a dataset D with 1000 subjects; we divide D into 500 subjects for training and 500 subjects
for out-of-sample testing. We compute the CIF function for the testing subjects via the different
benchmarks, and based on those functions we evaluate the cause-specific C-index for time horizons
[1, 2.5, 7.5, 10]. We average the C-indexes achieved by each benchmark over the 1000 experiments
and report the mean value and the 95% confidence interval at each time horizon. In all experiments,
we induce right-censoring on 100 subjects which we randomly pick from D; for a subject i, right-censoring is induced by altering her survival time as follows: Ti ← Uniform(0, Ti).
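For concreteness, the data-generating process for model B, including the induced right-censoring, can be sketched as below. This is our reading of the notation above: we take exp(·) to denote an exponential distribution with the stated mean, which the description leaves implicit.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_model_b(n=1000, d=10):
    """Sample one synthetic cohort from model B (a sketch under the
    assumption that exp(.) is an exponential distribution with that mean)."""
    beta1, beta2 = rng.uniform(size=d), rng.uniform(size=d)
    X = rng.standard_normal((n, d))
    T1 = rng.exponential(np.cosh(X @ beta1))
    T2 = rng.exponential(np.abs(rng.standard_normal(n) + np.sinh(X @ beta2)))
    T = np.minimum(T1, T2)
    k = np.where(T1 <= T2, 1, 2)
    # right-censor 100 randomly chosen subjects: T_i <- Uniform(0, T_i)
    cens = rng.choice(n, size=100, replace=False)
    T[cens] = rng.uniform(0, T[cens])
    k[cens] = 0  # 0 encodes the censoring label (empty set)
    return X, T, k
```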
Figure 6: Results for model B. (Plot: cause-specific C-index C2(t) of DMGP, MGP, THR, Cox and FG against the time horizon t.)
Fig. 4, 5, and 6 depict the cause-specific C-indexes for all the survival methods under consideration
when applied to the data generated by models A and B (error bars correspond to the 95% confidence
intervals). As we can see, the DMGP model outperforms all other benchmarks for survival data
generated by both models. For model A, we only depict C1 (t) in Fig. 4 since the results on C2 (t)
are almost identical due to the symmetry of model A with respect to the two competing risks. Fig. 4
shows that, for all time horizons, the DMGP model already confers a gain in the C-index even when
the data is generated by model A, which displays simple linear interactions between the covariates
and the parameters of the survival time distribution. Fig. 5 and 6 show that the performance gains
achieved by the DMGP are even larger under model B (for both C1 (t) and C2 (t)). This is because
model B displays a highly nonlinear relationship between covariates and survival times, and in
addition, it assumes a complicated form for the distributions of the survival times, all of which
are features that can be captured well by a DMGP but not by the other benchmarks which posit
strict parametric assumptions. The superiority of DMGPs to MGPs shows the value of the extra
representational power attained by adding multiple layers to conventional MGPs.
5.2 Real Data
More than 30 million patients in the U.S. are diagnosed with either cardiovascular disease (CVD) or
cancer [1, 2, 29]. Mounting evidence suggests that CVD and cancer share a number of risk factors,
and possess various biological similarities and (possible) interactions; in addition, many of the existing cancer therapies increase a patient's risk for CVD [2, 29]. Therefore, it is important that patients
who are at risk of both cancer and CVD be provided with a joint prognosis of mortality due to the
two competing diseases in order to properly manage therapeutic interventions. This is a challenging
problem since CVD patient cohorts are very heterogeneous; CVD exhibits complex phenotypes for
which mortality rates can vary as much as 10-fold among patients in the same phenotype [1, 2]. The
goal of this Section is to investigate the ability of our model to accurately model survival of patients
in such a highly heterogeneous cohort, with CVD and cancer as competing risks.
We conducted experiments on a real-world patient cohort extracted from a publicly accessible dataset
provided by the Surveillance, Epidemiology, and End Results Program (SEER, https://seer.cancer.gov/causespecific/). The extracted
cohort contains data on survival of breast cancer patients over the years 1992-2007. The
total number of subjects in the cohort is 61,050, with a follow-up period restricted to 10 years.
The mortality rate of the subjects within the 10-year follow-up period is 25.56%. We divided the
mortality causes into: (1) death due to breast cancer (13.64%), (2) death due to CVD (4.62%), and
(3) death due to other causes (7.3%), i.e. K = {∅, 1, 2, 3}. Every subject is associated with 20
covariates including: age, race, gender, morphology information (Lymphoma subtype, histological
type, etc), diagnostic confirmation, therapy information (surgery, type of surgery, etc), tumor size
and type, etc. We divide the dataset into training and testing sets, and report the C-index results
obtained for all benchmarks via 10-fold cross-validation.
Figure 7: Boxplot for the cause-specific C-indexes of various methods. The x-axis contains the methods' names (DMGP, MGP, FG, Cox, THR), and with each method, 3 boxplots corresponding to the C-indexes for the different causes (breast cancer, CVD, other causes) are provided.
Fig. 7 depicts boxplots for the 10-year survival C-indexes (i.e. C1 (10), C2 (10) and C3 (10)) of
all benchmarks for the 3 competing risks. With respect to predicting survival times due to "other
causes", the gain provided by DMGPs is marginal. We believe that this due to the absence of
the covariates that are predictive of mortality due to causes other than breast cancer and CVD in
the SEER dataset. The median C-index of our model is larger than all other benchmarks for all
causes. In terms of the median C-index, our model provides a significant improvement in predicting
breast cancer survival times while maintaining a decent gain in the accuracy of predicting survival
times of CVD as well. This implies that DMGPs, by virtue of our nonparametric multi-task learning
formulation, are capable of accurately (and flexibly) capturing the "shared representation" of the two
"correlated" risks of breast cancer and CVD as a function of their shared risk factors (hypertension,
obesity, diabetes mellitus, age, etc). As expected, since CVD is a phenotype-rich disease, predictions
of breast cancer survival are more accurate than those for CVD for all benchmarks.
The competing multi-task modeling benchmark, MGP, is inferior to our model as it restricts the
survival times to an exponential-like parametric distribution (See [Eq. 13, 19]). Contrarily, our
model allows for a nonparametric model of the survival curves, which appears to be crucial for
modeling breast cancer survival. This is evident in the boxplots of the cause-specific Cox benchmark,
which is the only benchmark that performs better on CVD than breast cancer. Since the Cox model
is restricted to a proportional hazard model with parametric, non-crossing survival curves, its poor
performance on predicting breast cancer survival suggests that breast cancer patients have crossing
survival curves, which signals the need for a nonparametric survival model [9]. This explains the
gain achieved by DMGPs as compared to MGPs (and all other benchmarks), which posit strong
parametric assumptions on the patients' survival curves.
6 Discussion
The problem of survival analysis with competing risks has recently gained significant attention in
the medical community due to the realization that many chronic diseases possess a shared biology.
We have proposed a survival model for competing risks that hinges on a novel multi-task learning
conception of cause-specific survival analysis. Our model is liberated from the traditional parametric
restrictions imposed by previous models; it allows for nonparametric learning of patient-specific
survival curves and their interactions with the patients' covariates. This is achieved by modeling the
patients' cause-specific survival times as a function of the patients' covariates using deep multi-task
Gaussian processes. Through the personalized actionable prognoses offered by our model, clinicians
can design personalized treatment plans that (hopefully) save thousands of lives annually.
References
[1] H. J. Lim, X. Zhang, R. Dyck, and N. Osgood. Methods of Competing Risks Analysis of End-stage Renal
Disease and Mortality among People with Diabetes. BMC Medical Research Methodology, 10(1), 97, 2010.
[2] P. C. Lambert, P. W. Dickman, C. P. Nelson, and P. Royston. Estimating the Crude Probability of Death due
to Cancer and other Causes using Relative Survival Models. Statistics in Medicine, 29(7): 885-895, 2010.
[3] J. M. Satagopan, L. Ben-Porat, M. Berwick, M. Robson, D. Kutler, and A. Auerbach. A Note on Competing
Risks in Survival Data Analysis. British Journal of Cancer, 91(7): 1229-1235, 2004.
[4] J. P. Fine and R. J. Gray. A Proportional Hazards Model for the Subdistribution of a Competing Risk.
Journal of the American statistical association, 94(446): 496-509, 1999.
[5] M. J. Crowder. Classical competing risks. CRC Press, 2001.
[6] T. A. Gooley, W. Leisenring, J. Crowley, and B. E. Storer. Estimation of Failure Probabilities in the Presence
of Competing Risks: New Representations of Old Estimators. Statistics in Medicine, 18(6): 695-706, 1999.
[7] A. Tsiatis. A Non-identifiability Aspect of the Problem of Competing Risks. PNAS, 72(1): 20-22, 1975.
[8] J. Henry, Y. Pylypchuk, T. Searcy, and V. Patel. Adoption of Electronic Health Record Systems among US
Non-federal Acute Care Hospitals: 2008-2015. The Office of National Coordinator, 2016.
[9] T. Fernández, N. Rivera, and Y. W. Teh. Gaussian Processes for Survival Analysis. In NIPS, 2016.
[10] C. N. Yu, R. Greiner, H. C. Lin, and V. Baracos. Learning Patient-specific Cancer Survival Distributions
as a Sequence of Dependent Regressors. In NIPS, 1845-1853, 2011.
[11] H. Steck, B. Krishnapuram, C. Dehing-oberije, P. Lambin and V. C. Raykar. On Ranking in Survival
Analysis: Bounds on the Concordance Index. In NIPS, 1209-1216, 2008.
[12] R. Ranganath, A. Perotte, N. Elhadad, and D. Blei. Deep Survival Analysis. arXiv:1608.02158, 2016.
[13] F. S. Collins and H. Varmus. A New Initiative on Precision Medicine. New England Journal of Medicine,
372(9): 793-795, 2015.
[14] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for Vector-valued Functions: A Review. Foundations and Trends in Machine Learning, 4(3):195-266, 2012.
[15] A. Damianou and N. Lawrence. Deep Gaussian Processes. In AISTATS, 2013.
[16] E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian Process Prediction. In NIPS, 2007.
[17] M. K. Titsias and N. D. Lawrence. Bayesian Gaussian Process Latent Variable Model. In AISTATS, 2010.
[18] D. Kingma and J. Ba. ADAM: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
[19] J. E. Barrett and A. C. C. Coolen. Gaussian Process Regression for Survival Data with Competing Risks.
arXiv preprint arXiv:1312.1591, 2013.
[20] E. Snelson, C. E. Rasmussen, and Z. Ghahramani. Warped Gaussian Processes. In NIPS, 2004.
[21] E. L. Kaplan and P. Meier. Nonparametric Estimation from Incomplete Observations. Journal of the
American Statistical Association, 53(282):457-481, 1958.
[22] D. Cox. Regression Models and Life-tables. Journal of Royal Statistical Society, 34(2):187-220, 1972.
[23] M. De Iorio, W. O. Johnson, P. Müller, and G. L. Rosner. Bayesian Nonparametric Non-proportional
Hazards Survival Modeling. Biometrics, 65(3): 762-771, 2009.
[24] S. Martino, R. Akerkar, and H. Rue. Approximate Bayesian Inference for Survival Models. Scandinavian
Journal of Statistics, 38(3):514-528, 2011.
[25] M. L. T. Lee and A. G. Whitmore. Threshold Regression for Survival Analysis: Modeling Event Times by
a Stochastic Process Reaching a Boundary. Statistical Science, 501-513, 2006.
[26] H. Ishwaran, U. B. Kogalur, E. H. Blackstone, and M. S. Lauer. Random Survival Forests. The Annals of
Applied Statistics, 841-860, 2008.
[27] M. Wolbers, P. Blanche, M. T. Koller, J. C. Witteman and A. T. Gerds. Concordance for Prognostic Models
with Competing Risks. Biostatistics, 15(3): 526-539, 2014.
[28] P. C. Austin, D. S. Lee, and J. P. Fine. Introduction to the Analysis of Survival Data in the Presence of
Competing Risks. Circulation, 133(6): 601-609, 2016.
[29] R. Koene, et al. Shared Risk Factors in Cardiovascular Disease and Cancer. Circulation, 2016.
Masked Autoregressive Flow for Density Estimation
George Papamakarios
University of Edinburgh
[email protected]
Theo Pavlakou
University of Edinburgh
[email protected]
Iain Murray
University of Edinburgh
[email protected]
Abstract
Autoregressive models are among the best performing neural density estimators.
We describe an approach for increasing the flexibility of an autoregressive model,
based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the
random numbers of the next model in the stack, we obtain a type of normalizing
flow suitable for density estimation, which we call Masked Autoregressive Flow.
This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art
performance in a range of general-purpose density estimation tasks.
1 Introduction
The joint density p(x) of a set of variables x is a central object of interest in machine learning. Being
able to access and manipulate p(x) enables a wide range of tasks to be performed, such as inference,
prediction, data completion and data generation. As such, the problem of estimating p(x) from a set
of examples {xn } is at the core of probabilistic unsupervised learning and generative modelling.
In recent years, using neural networks for density estimation has been particularly successful. Combining the flexibility and learning capacity of neural networks with prior knowledge about the structure
of data to be modelled has led to impressive results in modelling natural images [4, 30, 37, 38] and
audio data [34, 36]. State-of-the-art neural density estimators have also been used for likelihood-free
inference from simulated data [21, 23], variational inference [13, 24], and as surrogates for maximum
entropy models [19].
Neural density estimators differ from other approaches to generative modelling, such as variational
autoencoders [12, 25] and generative adversarial networks [7], in that they readily provide exact
density evaluations. As such, they are more suitable in applications where the focus is on explicitly
evaluating densities, rather than generating synthetic data. For instance, density estimators can learn
suitable priors for data from large unlabelled datasets, for use in standard Bayesian inference [39].
In simulation-based likelihood-free inference, conditional density estimators can learn models for
the likelihood [5] or the posterior [23] from simulated data. Density estimators can learn effective
proposals for importance sampling [22] or sequential Monte Carlo [8, 21]; such proposals can be
used in probabilistic programming environments to speed up inference [15, 16]. Finally, conditional
density estimators can be used as flexible inference networks for amortized variational inference and
as part of variational autoencoders [12, 25].
A challenge in neural density estimation is to construct models that are flexible enough to represent
complex densities, but have tractable density functions and learning algorithms. There are mainly
two families of neural density estimators that are both flexible and tractable: autoregressive models
[35] and normalizing flows [24]. Autoregressive models decompose the joint density as a product of
conditionals, and model each conditional in turn. Normalizing flows transform a base density (e.g. a
standard Gaussian) into the target density by an invertible transformation with tractable Jacobian.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Our starting point is the realization (as pointed out by Kingma et al. [13]) that autoregressive models,
when used to generate data, correspond to a differentiable transformation of an external source of
randomness (typically obtained by random number generators). This transformation has a tractable
Jacobian by design, and for certain autoregressive models it is also invertible, hence it precisely
corresponds to a normalizing flow. Viewing an autoregressive model as a normalizing flow opens
the possibility of increasing its flexibility by stacking multiple models of the same type, by having
each model provide the source of randomness for the next model in the stack. The resulting stack of
models is a normalizing flow that is more flexible than the original model, and that remains tractable.
In this paper we present Masked Autoregressive Flow (MAF), which is a particular implementation of
the above normalizing flow that uses the Masked Autoencoder for Distribution Estimation (MADE)
[6] as a building block. The use of MADE enables density evaluations without the sequential loop
that is typical of autoregressive models, and thus makes MAF fast to evaluate and train on parallel
computing architectures such as Graphics Processing Units (GPUs). We show a close theoretical
connection between MAF and Inverse Autoregressive Flow (IAF) [13], which has been designed for
variational inference instead of density estimation, and show that both correspond to generalizations
of the successful Real NVP [4]. We experimentally evaluate MAF on a wide range of datasets, and
we demonstrate that (a) MAF outperforms Real NVP on general-purpose density estimation, and (b)
a conditional version of MAF achieves close to state-of-the-art performance on conditional image
modelling even with a general-purpose architecture.
2 Background

2.1 Autoregressive density estimation
Using the chain rule of probability, any joint density p(x) can be decomposed into a product of
one-dimensional conditionals as p(x) = ∏i p(xi | x1:i−1). Autoregressive density estimators [35]
model each conditional p(xi | x1:i−1) as a parametric density, whose parameters are a function of a
hidden state hi. In recurrent architectures, hi is a function of the previous hidden state hi−1 and the
ith input variable xi. The Real-valued Neural Autoregressive Density Estimator (RNADE) [32] uses
mixtures of Gaussian or Laplace densities for modelling the conditionals, and a simple linear rule for
updating the hidden state. More flexible approaches for updating the hidden state are based on Long
Short-Term Memory recurrent neural networks [30, 38].
A drawback of autoregressive models is that they are sensitive to the order of the variables. For
example, the order of the variables matters when learning the density of Figure 1a if we assume a
model with Gaussian conditionals. As Figure 1b shows, a model with order (x1 , x2 ) cannot learn
this density, even though the same model with order (x2 , x1 ) can represent it perfectly. In practice
it is hard to know which of the factorially many orders is the most suitable for the task at hand.
Autoregressive models that are trained to work with an order chosen at random have been developed,
and the predictions from different orders can then be combined in an ensemble [6, 33]. Our approach
(Section 3) can use a different order in each layer, and using random orders would also be possible.
Straightforward recurrent autoregressive models would update a hidden state sequentially for every
variable, requiring D sequential computations to compute the probability p(x) of a D-dimensional
vector, which is not well-suited for computation on parallel architectures such as GPUs. One way to
enable parallel computation is to start with a fully-connected model with D inputs and D outputs, and
drop out connections in order to ensure that output i will only be connected to inputs 1, 2, . . ., i−1.
Output i can then be interpreted as computing the parameters of the ith conditional p(xi | x1:i−1).
By construction, the resulting model will satisfy the autoregressive property, and at the same time
it will be able to calculate p(x) efficiently on a GPU. An example of this approach is the Masked
Autoencoder for Distribution Estimation (MADE) [6], which drops out connections by multiplying
the weight matrices of a fully-connected autoencoder with binary masks. Other mechanisms for
dropping out connections include masked convolutions [38] and causal convolutions [36].
2.2 Normalizing flows
A normalizing flow [24] represents p(x) as an invertible differentiable transformation f of a base
density πu(u). That is, x = f(u) where u ∼ πu(u). The base density πu(u) is chosen such that it
can be easily evaluated for any input u (a common choice for πu(u) is a standard Gaussian). Under
Figure 1: (a) The density to be learnt, defined as p(x1, x2) = N(x2 | 0, 4) N(x1 | (1/4)x2², 1). (b) The density learnt by a MADE with order (x1, x2) and Gaussian conditionals. Scatter plot shows the train data transformed into random numbers u; the non-Gaussian distribution indicates that the model is a poor fit. (c) Learnt density and transformed train data of a 5-layer MAF with the same order (x1, x2).
the invertibility assumption for f , the density p(x) can be calculated as
p(x) = πu(f⁻¹(x)) |det( ∂f⁻¹/∂x )|.        (1)
In order for Equation (1) to be tractable, the transformation f must be constructed such that (a) it
is easy to invert, and (b) the determinant of its Jacobian is easy to compute. An important point is
that if transformations f1 and f2 have the above properties, then their composition f1 ∘ f2 also has
these properties. In other words, the transformation f can be made deeper by composing multiple
instances of it, and the result will still be a valid normalizing flow.
There have been various approaches in developing normalizing flows. An early example is Gaussianization [2], which is based on successive application of independent component analysis. Enforcing
invertibility with nonsingular weight matrices has been proposed [1, 26], however in such approaches
calculating the determinant of the Jacobian scales cubically with data dimensionality in general. Planar/radial flows [24] and Inverse Autoregressive Flow (IAF) [13] are models whose Jacobian is
tractable by design. However, they were developed primarily for variational inference and are not
well-suited for density estimation, as they can only efficiently calculate the density of their own samples and not of externally provided datapoints. The Non-linear Independent Components Estimator
(NICE) [3] and its successor Real NVP [4] have a tractable Jacobian and are also suitable for density
estimation. IAF, NICE and Real NVP are discussed in more detail in Section 3.
3 Masked Autoregressive Flow

3.1 Autoregressive models as normalizing flows
Consider an autoregressive model whose conditionals are parameterized as single Gaussians. That is,
the ith conditional is given by
p(xi | x1:i−1) = N(xi | μi, (exp αi)²),  where  μi = fμi(x1:i−1) and αi = fαi(x1:i−1).        (2)
In the above, fμi and fαi are unconstrained scalar functions that compute the mean and log standard
deviation of the ith conditional given all previous variables. We can generate data from the above
model using the following recursion:
xi = ui exp(αi) + μi,  where  μi = fμi(x1:i−1), αi = fαi(x1:i−1) and ui ∼ N(0, 1).        (3)
In the above, u = (u1 , u2 , . . . , uI ) is the vector of random numbers the model uses internally to
generate data, typically by making calls to a random number generator often called randn().
Equation (3) provides an alternative characterization of the autoregressive model as a transformation
f from the space of random numbers u to the space of data x. That is, we can express the model
as x = f (u) where u ? N (0, I). By construction, f is easily invertible. Given a datapoint x, the
random numbers u that were used to generate it are obtained by the following recursion:
ui = (xi − μi) exp(−αi),  where  μi = fμi(x1:i−1) and αi = fαi(x1:i−1).        (4)
Due to the autoregressive structure, the Jacobian of f ?1 is triangular by design, hence its absolute
determinant can be easily obtained as follows:
|det( ∂f⁻¹/∂x )| = exp( −Σi αi ),  where  αi = fαi(x1:i−1).        (5)
It follows that the autoregressive model can be equivalently interpreted as a normalizing flow, whose
density p(x) can be obtained by substituting Equations (4) and (5) into Equation (1). This observation
was first pointed out by Kingma et al. [13].
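To make Equations (3)-(5) concrete, here is a minimal NumPy sketch of one Gaussian autoregressive layer (ours, not the paper's code); f_mu and f_alpha stand for the functions computing μi and αi (e.g. the two output heads of a MADE) and must themselves respect the autoregressive property for the forward pass to be correct.

```python
import numpy as np

def layer_inverse(x, f_mu, f_alpha):
    """x -> u in one parallel pass (Eq. 4), plus log|det df^-1/dx| (Eq. 5)."""
    mu, alpha = f_mu(x), f_alpha(x)
    u = (x - mu) * np.exp(-alpha)
    return u, -alpha.sum(axis=-1)

def layer_forward(u, f_mu, f_alpha):
    """u -> x with D sequential steps (Eq. 3)."""
    x = np.zeros_like(u)
    for i in range(u.shape[-1]):
        mu, alpha = f_mu(x), f_alpha(x)   # entry i depends on x[..., :i] only
        x[..., i] = u[..., i] * np.exp(alpha[..., i]) + mu[..., i]
    return x
```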
A useful diagnostic for assessing whether an autoregressive model of the above type fits the target
density well is to transform the train data {xn } into corresponding random numbers {un } using
Equation (4), and assess whether the ui's come from independent standard normals. If the ui's do
not seem to come from independent standard normals, this is evidence that the model is a bad fit. For
instance, Figure 1b shows that the scatter plot of the random numbers associated with the train data
can look significantly non-Gaussian if the model fits the target density poorly.
Here we interpret autoregressive models as a flow, and improve the model fit by stacking multiple
instances of the model into a deeper flow. Given autoregressive models M1 , M2 , . . . , MK , we model
the density of the random numbers u1 of M1 with M2 , model the random numbers u2 of M2 with M3
and so on, finally modelling the random numbers uK of MK with a standard Gaussian. This stacking
adds flexibility: for example, Figure 1c demonstrates that a flow of 5 autoregressive models is able
to learn multimodal conditionals, even though each model has unimodal conditionals. Stacking has
previously been used in a similar way to improve model fit of deep belief nets [9] and deep mixtures
of factor analyzers [28].
We choose to implement the set of functions {fμi, fαi} with masking, following the approach used
by MADE [6]. MADE is a feedforward network that takes x as input and outputs μi and αi for
all i with a single forward pass. The autoregressive property is enforced by multiplying the weight
matrices of MADE with suitably constructed binary masks. In other words, we use MADE with
Gaussian conditionals as the building layer of our flow. The benefit of using masking is that it
enables transforming from data x to random numbers u and thus calculating p(x) in one forward
pass through the flow, thus eliminating the need for sequential recursion as in Equation (4). We call
this implementation of stacking MADEs into a flow Masked Autoregressive Flow (MAF).
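Under these conventions, evaluating log p(x) for the whole stack is a single backward sweep through the layers. A sketch building on the layer_inverse helper above, assuming a standard-Gaussian base density:

```python
import numpy as np

def maf_log_density(x, layers):
    """log p(x) for a stack of Gaussian autoregressive layers (Eq. 1).
    `layers` is a list of (f_mu, f_alpha) pairs, data-side layer first."""
    u, total_logdet = x, 0.0
    for f_mu, f_alpha in layers:
        u, logdet = layer_inverse(u, f_mu, f_alpha)
        total_logdet = total_logdet + logdet
    D = x.shape[-1]
    log_base = -0.5 * (u ** 2).sum(axis=-1) - 0.5 * D * np.log(2 * np.pi)
    return log_base + total_logdet
```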
3.2 Relationship with Inverse Autoregressive Flow
Like MAF, Inverse Autoregressive Flow (IAF) [13] is a normalizing flow which uses MADE as its
component layer. Each layer of IAF is defined by the following recursion:
xi = ui exp(αi) + μi,  where  μi = fμi(u1:i−1) and αi = fαi(u1:i−1).        (6)
Similarly to MAF, the functions {fμi, fαi} are computed using a MADE with Gaussian conditionals.
The difference is architectural: in MAF, μi and αi are directly computed from previous data variables
x1:i−1, whereas in IAF, μi and αi are directly computed from previous random numbers u1:i−1.
The consequence of the above is that MAF and IAF are different models with different computational
trade-offs. MAF is capable of calculating the density p(x) of any datapoint x in one pass through
the model, however sampling from it requires performing D sequential passes (where D is the
dimensionality of x). In contrast, IAF can generate samples and calculate their density with one pass,
however calculating the density p(x) of an externally provided datapoint x requires D passes to find
the random numbers u associated with x. Hence, the design choice of whether to connect μi and
αi directly to x1:i−1 (obtaining MAF) or to u1:i−1 (obtaining IAF) depends on the intended usage.
IAF is suitable as a recognition model for stochastic variational inference [12, 25], where it only
ever needs to calculate the density of its own samples. In contrast, MAF is more suitable for density
estimation, because each example requires only one pass through the model whereas IAF requires D.
A theoretical equivalence between MAF and IAF is that training a MAF with maximum likelihood
corresponds to fitting an implicit IAF to the base density with stochastic variational inference. Let
πx(x) be the data density we wish to learn, πu(u) be the base density, and f be the transformation
from u to x as implemented by MAF. The density defined by MAF (with added subscript x for
disambiguation) is
px(x) = πu(f⁻¹(x)) |det( ∂f⁻¹/∂x )|.        (7)
The inverse transformation f ?1 from x to u can be seen as describing an implicit IAF with base
density πx(x), which defines the following implicit density over the u space:

pu(u) = πx(f(u)) |det( ∂f/∂u )|.        (8)
Training MAF by maximizing the total log likelihood Σn log p(xn) on train data {xn} corresponds
to fitting px(x) to πx(x) by stochastically minimizing DKL(πx(x) ‖ px(x)). In Section A of the
supplementary material, we show that

DKL(πx(x) ‖ px(x)) = DKL(pu(u) ‖ πu(u)).        (9)
Hence, stochastically minimizing DKL(πx(x) ‖ px(x)) is equivalent to fitting pu(u) to πu(u) by
minimizing DKL(pu(u) ‖ πu(u)). Since the latter is the loss function used in variational inference,
and pu(u) can be seen as an IAF with base density πx(x) and transformation f⁻¹, it follows that
training MAF as a density estimator of πx(x) is equivalent to performing stochastic variational
inference with an implicit IAF, where the posterior is taken to be the base density πu(u) and the
transformation f⁻¹ implements the reparameterization trick [12, 25]. This argument is presented in
more detail in Section A of the supplementary material.
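The change-of-variables step behind Equation (9) can be sketched as follows (our reconstruction of the argument, not a quotation of the supplementary material): substituting x = f(u) and using (7) and (8),

```latex
\begin{align*}
D_{\mathrm{KL}}(\pi_x \,\|\, p_x)
  &= \int \pi_x(x)\,\log\frac{\pi_x(x)}{p_x(x)}\,\mathrm{d}x
   = \int \pi_x(f(u))\,\log\frac{\pi_x(f(u))}{p_x(f(u))}
     \left|\det\frac{\partial f}{\partial u}\right| \mathrm{d}u \\
  &= \int p_u(u)\,\log\frac{p_u(u)\,\left|\det\frac{\partial f}{\partial u}\right|^{-1}}
                           {\pi_u(u)\,\left|\det\frac{\partial f}{\partial u}\right|^{-1}}\,\mathrm{d}u
   = D_{\mathrm{KL}}(p_u \,\|\, \pi_u).
\end{align*}
```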
3.3 Relationship with Real NVP
Real NVP [4] (NVP stands for Non Volume Preserving) is a normalizing flow obtained by stacking
coupling layers. A coupling layer is an invertible transformation f from random numbers u to data x
with a tractable Jacobian, defined by
x1:d = u1:d,
xd+1:D = ud+1:D ⊙ exp(α) + β,  where  α = fα(u1:d),  β = fβ(u1:d).        (10)
In the above, ⊙ denotes elementwise multiplication, and the exp is applied to each element of α. The
transformation copies the first d elements, and scales and shifts the remaining D−d elements, with
the amount of scaling and shifting being a function of the first d elements. When stacking coupling
layers into a flow, the elements are permuted across layers so that a different set of elements is copied
each time. A special case of the coupling layer where α = 0 is used by NICE [3].
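A minimal NumPy sketch of one coupling layer follows (ours); f_alpha and f_beta are arbitrary networks of the first d variables, each returning a vector of length D−d, and the inverse also returns the log-determinant needed for density estimation.

```python
import numpy as np

def coupling_forward(u, d, f_alpha, f_beta):
    """u -> x as in Eq. (10): copy u_{1:d}, scale-and-shift the rest."""
    alpha, beta = f_alpha(u[..., :d]), f_beta(u[..., :d])
    x = u.copy()
    x[..., d:] = u[..., d:] * np.exp(alpha) + beta
    return x

def coupling_inverse(x, d, f_alpha, f_beta):
    """x -> u plus log|det df^-1/dx| = -sum(alpha); one parallel pass."""
    alpha, beta = f_alpha(x[..., :d]), f_beta(x[..., :d])
    u = x.copy()
    u[..., d:] = (x[..., d:] - beta) * np.exp(-alpha)
    return u, -alpha.sum(axis=-1)
```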
We can see that the coupling layer is a special case of both the autoregressive transformation used by
MAF in Equation (3), and the autoregressive transformation used by IAF in Equation (6). Indeed, we
can recover the coupling layer from the autoregressive transformation of MAF by setting αi = μi = 0
for i ≤ d and making μi and αi functions of only x1:d for i > d (for IAF we need to make μi and αi
functions of u1:d instead for i > d). In other words, both MAF and IAF can be seen as more flexible
(but different) generalizations of Real NVP, where each element is individually scaled and shifted as
a function of all previous elements. The advantage of Real NVP compared to MAF and IAF is that it
can both generate data and estimate densities with one forward pass only, whereas MAF would need
D passes to generate data and IAF would need D passes to estimate densities.
3.4 Conditional MAF
Given a set of example pairs {(xn , yn )}, conditional density estimation is the task of estimating
the conditional density p(x | y). Autoregressive modelling extends naturally to conditional density
estimation. Each term in the chain rule of probability can be conditioned on side-information y,
decomposing any conditional density as p(x | y) = ∏i p(xi | x1:i−1, y). Therefore, we can turn any
unconditional autoregressive model into a conditional one by augmenting its set of input variables
with y and only modelling the conditionals that correspond to x. Any order of the variables can be
chosen, as long as y comes before x. In masked autoregressive models, no connections need to be
dropped from the y inputs to the rest of the network.
We can implement a conditional version of MAF by stacking MADEs that were made conditional
using the above strategy. That is, in a conditional MAF, the vector y becomes an additional input
for every layer. As a special case of MAF, Real NVP can be made conditional in the same way.
In Section 4, we show that conditional MAF significantly outperforms unconditional MAF when
conditional information (such as data labels) is available. In our experiments, MAF was able to
benefit from conditioning considerably more than MADE and Real NVP.
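In a masked implementation, conditioning amounts to a choice of degrees: giving the y inputs degree 0 means the usual lower-or-equal-degree rule never drops a connection from y. A small sketch (ours; it plugs into a degree-based mask builder such as the one sketched in Section 4.1 below):

```python
import numpy as np

def conditional_input_degrees(D, y_dim):
    """Degrees for a conditional MADE: y before x, with degree 0 so that
    every hidden unit and every output may see all of y."""
    return np.concatenate([np.zeros(y_dim, dtype=int), np.arange(1, D + 1)])
```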
4 Experiments

4.1 Implementation and setup
We systematically evaluate three types of density estimator (MADE, Real NVP and MAF) in terms
of density estimation performance on a variety of datasets. Code for reproducing our experiments
(which uses Theano [29]) can be found at https://github.com/gpapamak/maf.
MADE. We consider two versions: (a) a MADE with Gaussian conditionals, denoted simply by
MADE, and (b) a MADE whose conditionals are each parameterized as a mixture of C Gaussians,
denoted by MADE MoG. We used C = 10 in all our experiments. MADE can be seen either as a
MADE MoG with C = 1, or as a MAF with only one autoregressive layer. Adding more Gaussian
components per conditional or stacking MADEs to form a MAF are two alternative ways of increasing
the flexibility of MADE, which we are interested in comparing.
Real NVP. We consider a general-purpose implementation of the coupling layer, which uses two
feedforward neural networks, implementing the scaling function fα and the shifting function fβ
respectively. Both networks have the same architecture, except that fα has hyperbolic tangent hidden
units, whereas fβ has rectified linear hidden units (we found this combination to perform best). Both
networks have a linear output. We consider Real NVPs with either 5 or 10 coupling layers, denoted
by Real NVP (5) and Real NVP (10) respectively, and in both cases the base density is a standard
Gaussian. Successive coupling layers alternate between (a) copying the odd-indexed variables and
transforming the even-indexed variables, and (b) copying the even-indexed variables and transforming
the odd-indexed variables. It is important to clarify that this is a general-purpose implementation of
Real NVP which is different and thus not comparable to its original version [4], which was designed
specifically for image data. Here we are interested in comparing coupling layers with autoregressive
layers as building blocks of normalizing flows for general-purpose density estimation tasks, and our
design of Real NVP is such that a fair comparison between the two can be made.
MAF. We consider three versions: (a) a MAF with 5 autoregressive layers and a standard Gaussian as
a base density πu(u), denoted by MAF (5), (b) a MAF with 10 autoregressive layers and a standard
Gaussian as a base density, denoted by MAF (10), and (c) a MAF with 5 autoregressive layers and a
MADE MoG with C = 10 Gaussian components as a base density, denoted by MAF MoG (5). MAF
MoG (5) can be thought of as a MAF (5) stacked on top of a MADE MoG and trained jointly with it.
In all experiments, MADE and MADE MoG order the inputs using the order that comes with the
dataset by default; no alternative orders were considered. MAF uses the default order for the first
autoregressive layer (i.e. the layer that directly models the data) and reverses the order for each
successive layer (the same was done for IAF by Kingma et al. [13]).
MADE, MADE MoG and each layer in MAF is a feedforward neural network with masked weight
matrices, such that the autoregressive property holds. The procedure for designing the masks (due to
Germain et al. [6]) is as follows. Each input or hidden unit is assigned a degree, which is an integer
ranging from 1 to D, where D is the data dimensionality. The degree of an input is taken to be its
index in the order. The D outputs have degrees that sequentially range from 0 to D−1. A unit is
allowed to receive input only from units with lower or equal degree, which enforces the autoregressive
property. In order for output i to be connected to all inputs with degree less than i, and thus make
sure that no conditional independences are introduced, it is both necessary and sufficient that every
hidden layer contains every degree. In all experiments except for CIFAR-10, we sequentially assign
degrees within each hidden layer and use enough hidden units to make sure that all degrees appear.
Because CIFAR-10 is high-dimensional, we used fewer hidden units than inputs and assigned degrees
to hidden units uniformly at random (as was done by Germain et al. [6]).
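The degree-based recipe above translates directly into mask construction. A sketch (ours) follows; the sequential degree assignment matches the text, and the uniform-random variant used for CIFAR-10 would replace the modular assignment.

```python
import numpy as np

def made_masks(D, hidden_sizes, input_degrees=None):
    """Binary masks enforcing the autoregressive property: inputs carry
    degrees 1..D, hidden units cycle through 1..D-1, outputs carry 0..D-1,
    and a unit may only receive input from units of lower or equal degree."""
    degrees = [np.arange(1, D + 1) if input_degrees is None else input_degrees]
    for h in hidden_sizes:
        degrees.append(np.arange(h) % (D - 1) + 1)  # every degree 1..D-1 appears
    degrees.append(np.arange(D))                    # output degrees 0..D-1
    return [(d_out[:, None] >= d_in[None, :]).astype(np.float32)
            for d_in, d_out in zip(degrees[:-1], degrees[1:])]

# Elementwise-multiply each weight matrix by its mask before applying it:
masks = made_masks(D=5, hidden_sizes=[20, 20])
```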
We added batch normalization [10] after each coupling layer in Real NVP and after each autoregressive layer in MAF. Batch normalization is an elementwise scaling and shifting, which is easily
invertible and has a tractable Jacobian, and thus it is suitable for use in a normalizing flow. We
found that batch normalization in Real NVP and MAF reduces training time, increases stability
during training and improves performance (as observed by Dinh et al. [4] for Real NVP). Section B
of the supplementary material discusses our implementation of batch normalization and its use in
normalizing flows.
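Our reading of batch normalization as a flow layer, sketched below, is based on the standard construction rather than on Section B itself: in the data-to-noise direction it standardizes with stored statistics and contributes a log-determinant that is constant across datapoints.

```python
import numpy as np

def batchnorm_inverse(x, mean, var, eps=1e-5):
    """Batch norm viewed as a flow layer: x -> u and log|det du/dx|."""
    u = (x - mean) / np.sqrt(var + eps)
    logdet = -0.5 * np.log(var + eps).sum()  # same for every datapoint
    return u, logdet
```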
All models were trained with the Adam optimizer [11], using a minibatch size of 100, and a step size
of 10⁻³ for MADE and MADE MoG, and of 10⁻⁴ for Real NVP and MAF. A small amount of ℓ2
Table 1: Average test log likelihood (in nats) for unconditional density estimation. The best performing model for each dataset is shown in bold (multiple models are highlighted if the difference is not statistically significant according to a paired t-test). Error bars correspond to 2 standard deviations.

                 POWER          GAS            HEPMASS         MINIBOONE       BSDS300
Gaussian        -7.74 ± 0.02   -3.58 ± 0.75   -27.93 ± 0.02   -37.24 ± 1.07    96.67 ± 0.25
MADE            -3.08 ± 0.03    3.56 ± 0.04   -20.98 ± 0.02   -15.59 ± 0.50   148.85 ± 0.28
MADE MoG         0.40 ± 0.01    8.47 ± 0.02   -15.15 ± 0.02   -12.27 ± 0.47   153.71 ± 0.28
Real NVP (5)    -0.02 ± 0.01    4.78 ± 1.80   -19.62 ± 0.02   -13.55 ± 0.49   152.97 ± 0.28
Real NVP (10)    0.17 ± 0.01    8.33 ± 0.14   -18.71 ± 0.02   -13.84 ± 0.52   153.28 ± 1.78
MAF (5)          0.14 ± 0.01    9.07 ± 0.02   -17.70 ± 0.02   -11.75 ± 0.44   155.69 ± 0.28
MAF (10)         0.24 ± 0.01   10.08 ± 0.02   -17.73 ± 0.02   -12.24 ± 0.45   154.93 ± 0.28
MAF MoG (5)      0.30 ± 0.01    9.59 ± 0.02   -17.39 ± 0.02   -11.68 ± 0.44   156.36 ± 0.28
regularization was added, with coefficient 10⁻⁶. Each model was trained with early stopping until no
improvement occurred for 30 consecutive epochs on the validation set. For each model, we selected
the number of hidden layers and number of hidden units based on validation performance (we gave
the same options to all models), as described in Section D of the supplementary material.
4.2 Unconditional density estimation
Following Uria et al. [32], we perform unconditional density estimation on four UCI datasets
(POWER, GAS, HEPMASS, MINIBOONE) and on a dataset of natural image patches (BSDS300).
UCI datasets. These datasets were taken from the UCI machine learning repository [18]. We selected
different datasets than Uria et al. [32], because the ones they used were much smaller, resulting in
an expensive cross-validation procedure involving a separate hyperparameter search for each fold.
However, our data preprocessing follows Uria et al. [32]. The sample mean was subtracted from the
data and each feature was divided by its sample standard deviation. Discrete-valued attributes were
eliminated, as well as every attribute with a Pearson correlation coefficient greater than 0.98. These
procedures are meant to avoid trivial high densities, which would make the comparison between
approaches hard to interpret. Section D of the supplementary material gives more details about the
UCI datasets and the individual preprocessing done on each of them.
Image patches. This dataset was obtained by extracting random 8×8 monochrome patches from
the BSDS300 dataset of natural images [20]. We used the same preprocessing as by Uria et al. [32].
Uniform noise was added to dequantize pixel values, which were then rescaled to be in the range [0, 1].
The mean pixel value was subtracted from each patch, and the bottom-right pixel was discarded.
Table 1 shows the performance of each model on each dataset. A Gaussian fitted to the train data is
reported as a baseline. We can see that on 3 out of 5 datasets MAF is the best performing model, with
MADE MoG being the best performing model on the other 2. On all datasets, MAF outperforms
Real NVP. For the MINIBOONE dataset, due to overlapping error bars, a pairwise comparison was
done to determine which model performs the best, the results of which are reported in Section E
of the supplementary material. MAF MoG (5) achieves the best reported result on BSDS300 for a
single model with 156.36 nats, followed by Deep RNADE [33] with 155.2. An ensemble of 32 Deep
RNADEs was reported to achieve 157.0 nats [33]. The UCI datasets were used for the first time in
the literature for density estimation, so no comparison with existing work can be made yet.
4.3 Conditional density estimation
For conditional density estimation, we used the MNIST dataset of handwritten digits [17] and the
CIFAR-10 dataset of natural images [14]. In both datasets, each datapoint comes from one of 10
distinct classes. We represent the class label as a 10-dimensional, one-hot encoded vector y, and we
model the density p(x | y), where x represents an image. At test time, we evaluate the probability of
a test image x by p(x) = Σy p(x | y) p(y), where p(y) = 1/10 is a uniform prior over the labels. For
comparison, we also train every model as an unconditional density estimator and report both results.
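In log space this marginalization is a logsumexp over the ten class-conditional densities; a minimal sketch:

```python
import numpy as np
from scipy.special import logsumexp

def test_log_prob(cond_log_probs):
    """log p(x) = logsumexp_y [log p(x|y) + log p(y)], with uniform p(y).
    cond_log_probs: shape (10,), one entry per class label y."""
    n_classes = len(cond_log_probs)
    return logsumexp(cond_log_probs - np.log(n_classes))
```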
Table 2: Average test log likelihood (in nats) for conditional density estimation. The best performing model for each dataset is shown in bold. Error bars correspond to 2 standard deviations.

                 MNIST                               CIFAR-10
                 unconditional    conditional        unconditional   conditional
Gaussian        -1366.9 ± 1.4    -1344.7 ± 1.8       2367 ± 29       2030 ± 41
MADE            -1380.8 ± 4.8    -1361.9 ± 1.9        147 ± 20        187 ± 20
MADE MoG        -1038.5 ± 1.8    -1030.3 ± 1.7       -397 ± 21       -119 ± 20
Real NVP (5)    -1323.2 ± 6.6    -1326.3 ± 5.8       2576 ± 27       2642 ± 26
Real NVP (10)   -1370.7 ± 10.1   -1371.3 ± 43.9      2568 ± 26       2475 ± 25
MAF (5)         -1300.5 ± 1.7     -591.7 ± 1.7       2936 ± 27       5797 ± 26
MAF (10)        -1313.1 ± 2.0     -605.6 ± 1.8       3049 ± 26       5872 ± 26
MAF MoG (5)     -1100.3 ± 1.6    -1092.3 ± 1.7       2911 ± 26       2936 ± 26
For both MNIST and CIFAR-10, we use the same preprocessing as by Dinh et al. [4]. We dequantize pixel values by adding uniform noise, and then rescale them to [0, 1]. We transform the rescaled pixel values into logit space by $x \mapsto \mathrm{logit}(\lambda + (1 - 2\lambda)x)$, where $\lambda = 10^{-6}$ for MNIST and $\lambda = 0.05$ for CIFAR-10, and perform density estimation in that space. In the case of CIFAR-10, we also augment the train set with horizontal flips of all train examples (as also done by Dinh et al. [4]).
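A sketch of this dequantization and logit transform (the function and argument names are ours; the values of lambda are those given above):

    import numpy as np

    def to_logit_space(pixels, lam, rng=np.random.default_rng(0)):
        x = (pixels + rng.uniform(size=pixels.shape)) / 256.0  # dequantize to [0, 1]
        x = lam + (1.0 - 2.0 * lam) * x                        # keep x away from {0, 1}
        return np.log(x) - np.log1p(-x)                        # logit(x)

    # lam = 1e-6 for MNIST, lam = 0.05 for CIFAR-10.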
Table 2 shows the results on MNIST and CIFAR-10. The performance of a class-conditional Gaussian
is reported as a baseline for the conditional case. Log likelihoods are calculated in logit space. For
unconditional density estimation, MADE MoG is the best performing model on MNIST, whereas
MAF is the best performing model on CIFAR-10. For conditional density estimation, MAF is by far
the best performing model on both datasets. On CIFAR-10, both MADE and MADE MoG performed
significantly worse than the Gaussian baseline. MAF outperforms Real NVP in all cases.
The conditional performance of MAF is particularly impressive. MAF performs almost twice as well as its unconditional version and as every other model's conditional version. To facilitate comparison with the literature, Section E of the supplementary material reports results in bits/pixel.
MAF (5) and MAF (10), the two best performing conditional models, achieve 3.02 and 2.98 bits/pixel
respectively on CIFAR-10. This result is very close to the state-of-the-art 2.94 bits/pixel achieved
by a conditional PixelCNN++ [27], even though, unlike PixelCNN++, our version of MAF does not
incorporate prior image knowledge, and it pays a price for doing density estimation in a transformed
real-valued space (PixelCNN++ directly models discrete pixel values).
5 Discussion
We showed that we can improve MADE by modelling the density of its internal random numbers.
Alternatively, MADE can be improved by increasing the flexibility of its conditionals. The comparison
between MAF and MADE MoG showed that the best approach is dataset specific; in our experiments
MAF outperformed MADE MoG in 6 out of 9 cases, which is strong evidence of its competitiveness.
MADE MoG is a universal density approximator; with sufficiently many hidden units and Gaussian
components, it can approximate any continuous density arbitrarily well. It is an open question
whether MAF with a Gaussian base density has a similar property (MAF MoG clearly does).
We also showed that the coupling layer used in Real NVP is a special case of the autoregressive layer
used in MAF. In fact, MAF outperformed Real NVP in all our experiments. Real NVP has achieved
impressive performance in image modelling by incorporating knowledge about image structure. Our
results suggest that replacing coupling layers with autoregressive layers in the original version of Real
NVP is a promising direction for further improving its performance. Real NVP maintains, however, the advantage over MAF (and autoregressive models in general) that samples from the model can be generated efficiently in parallel.
MAF achieved impressive results in conditional density estimation. Whereas almost all models we
considered benefited from the additional information supplied by the labels, MAF nearly doubled
its performance, coming close to state-of-the-art models for image modelling without incorporating
any prior image knowledge. The ability of MAF to benefit significantly from conditional knowledge
suggests that automatic discovery of conditional structure (e.g. finding labels by clustering) could be
a promising direction for improving unconditional density estimation in general.
Density estimation is one of several types of generative modelling, with the focus on obtaining
accurate densities. However, we know that accurate densities do not necessarily imply good performance in other tasks, such as in data generation [31]. Alternative approaches to generative modelling
include variational autoencoders [12, 25], which are capable of efficient inference of their (potentially
interpretable) latent space, and generative adversarial networks [7], which are capable of high quality
data generation. Choice of method should be informed by whether the application at hand calls for
accurate densities, latent space inference or high quality samples. Masked Autoregressive Flow is a
contribution towards the first of these goals.
Acknowledgments
We thank Maria Gorinova for useful comments. George Papamakarios and Theo Pavlakou were supported by the Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1)
and the University of Edinburgh. George Papamakarios was also supported by Microsoft Research
through its PhD Scholarship Programme.
References
[1] J. Ballé, V. Laparra, and E. P. Simoncelli. Density modeling of images using a generalized normalization transformation. Proceedings of the 4th International Conference on Learning Representations, 2016.
[2] S. S. Chen and R. A. Gopinath. Gaussianization. Advances in Neural Information Processing Systems 13, pages 423–429, 2001.
[3] L. Dinh, D. Krueger, and Y. Bengio. NICE: Non-linear Independent Components Estimation. arXiv:1410.8516, 2014.
[4] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using Real NVP. Proceedings of the 5th International Conference on Learning Representations, 2017.
[5] Y. Fan, D. J. Nott, and S. A. Sisson. Approximate Bayesian computation via regression density estimation. Stat, 2(1):34–48, 2013.
[6] M. Germain, K. Gregor, I. Murray, and H. Larochelle. MADE: Masked Autoencoder for Distribution Estimation. Proceedings of the 32nd International Conference on Machine Learning, pages 881–889, 2015.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems 27, pages 2672–2680, 2014.
[8] S. Gu, Z. Ghahramani, and R. E. Turner. Neural adaptive sequential Monte Carlo. Advances in Neural Information Processing Systems 28, pages 2629–2637, 2015.
[9] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on Machine Learning, pages 448–456, 2015.
[11] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations, 2015.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2014.
[13] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improved variational inference with Inverse Autoregressive Flow. Advances in Neural Information Processing Systems 29, pages 4743–4751, 2016.
[14] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[15] T. D. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. Mansinghka. Picture: A probabilistic programming language for scene perception. IEEE Conference on Computer Vision and Pattern Recognition, pages 4390–4399, 2015.
[16] T. A. Le, A. G. Baydin, and F. Wood. Inference compilation and universal probabilistic programming. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017.
[17] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/.
[18] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[19] G. Loaiza-Ganem, Y. Gao, and J. P. Cunningham. Maximum entropy flow networks. Proceedings of the 5th International Conference on Learning Representations, 2017.
[20] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Proceedings of the 8th International Conference on Computer Vision, pages 416–423, 2001.
[21] B. Paige and F. Wood. Inference networks for sequential Monte Carlo in graphical models. Proceedings of the 33rd International Conference on Machine Learning, 2016.
[22] G. Papamakarios and I. Murray. Distilling intractable generative models, 2015. Probabilistic Integration Workshop at Neural Information Processing Systems 28.
[23] G. Papamakarios and I. Murray. Fast ε-free inference of simulation models with Bayesian conditional density estimation. Advances in Neural Information Processing Systems 29, 2016.
[24] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. Proceedings of the 32nd International Conference on Machine Learning, pages 1530–1538, 2015.
[25] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. Proceedings of the 31st International Conference on Machine Learning, pages 1278–1286, 2014.
[26] O. Rippel and R. P. Adams. High-dimensional probability estimation with deep density models. arXiv:1302.5125, 2013.
[27] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv:1701.05517, 2017.
[28] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep mixtures of factor analysers. Proceedings of the 29th International Conference on Machine Learning, pages 505–512, 2012.
[29] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, 2016.
[30] L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. Advances in Neural Information Processing Systems 28, pages 1927–1935, 2015.
[31] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. Proceedings of the 4th International Conference on Learning Representations, 2016.
[32] B. Uria, I. Murray, and H. Larochelle. RNADE: The real-valued neural autoregressive density-estimator. Advances in Neural Information Processing Systems 26, pages 2175–2183, 2013.
[33] B. Uria, I. Murray, and H. Larochelle. A deep and tractable density estimator. Proceedings of the 31st International Conference on Machine Learning, pages 467–475, 2014.
[34] B. Uria, I. Murray, S. Renals, C. Valentini-Botinhao, and J. Bridle. Modelling acoustic feature dependencies with artificial neural networks: Trajectory-RNADE. IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4465–4469, 2015.
[35] B. Uria, M.-A. Côté, K. Gregor, I. Murray, and H. Larochelle. Neural autoregressive distribution estimation. Journal of Machine Learning Research, 17(205):1–37, 2016.
[36] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016.
[37] A. van den Oord, N. Kalchbrenner, L. Espeholt, K. Kavukcuoglu, O. Vinyals, and A. Graves. Conditional image generation with PixelCNN decoders. Advances in Neural Information Processing Systems 29, pages 4790–4798, 2016.
[38] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. Proceedings of the 33rd International Conference on Machine Learning, pages 1747–1756, 2016.
[39] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. Proceedings of the 13th International Conference on Computer Vision, pages 479–486, 2011.
6,444 | 6,829 | Non-Convex Finite-Sum Optimization
Via SCSG Methods
Lihua Lei
UC Berkeley
[email protected]
Cheng Ju
UC Berkeley
[email protected]
Jianbo Chen
UC Berkeley
[email protected]
Michael I. Jordan
UC Berkeley
[email protected]
Abstract
We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods [21], for the smooth non-convex finite-sum optimization problem. Assuming the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E}\|\nabla f(x)\|^2 \le \epsilon$ is $O\big(\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\}\big)$, which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
1 Introduction

We study smooth non-convex finite-sum optimization problems of the form
$$\min_{x\in\mathbb{R}^d} f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x), \qquad (1)$$
where each component $f_i(x)$ is possibly non-convex with Lipschitz gradients. This generic form captures numerous statistical learning problems, ranging from generalized linear models [22] to deep neural networks [19].
focused on the asymptotic performance of algorithms [11, 7, 29], with non-asymptotic complexity
bounds emerging more recently [24]. In recent years, complexity results have been derived for both
gradient methods [13, 2, 8, 9] and stochastic gradient methods [12, 13, 6, 4, 26, 27, 3]. Unlike in the
convex case, in the non-convex case one can not expect a gradient-based algorithm to converge to the
global minimum if only smoothness is assumed. As a consequence, instead of measuring functionvalue suboptimality Ef (x) ? inf x f (x) as in the convex case, convergence is generally measured in
terms of the squared norm of the gradient; i.e., Ek?f (x)k2 . We summarize the best available rates 1
in Table 1. We also list the rates for Polyak-Lojasiewicz (P-L) functions, which will be defined in
Section 2. The accuracy for minimizing P-L functions is measured by Ef (x) ? inf x f (x).
1
It is also common to use Ek?f (x)k to measure convergence; see, e.g. [2, 8, 9, 3]. Our results
pcan be readily
2
transferred to this alternative measure by using Cauchy-Schwartz inequality, Ek?f (x)k ? Ek?f (x)k
? ,
although not vice versa. The rates under this alternative can be made comparable to ours by replacing ? by ?.
Table 1: Computation complexity of gradient methods and stochastic gradient methods for the finite-sum non-convex optimization problem (1). The second and third columns summarize the rates in the smooth and P-L cases respectively. $\mu$ is the P-L constant and $H^*$ is the variance of a stochastic gradient. These quantities are defined in Section 2. The final column gives additional required assumptions beyond smoothness or the P-L condition. The symbol $\wedge$ denotes a minimum and $\tilde{O}(\cdot)$ is the usual Landau big-O notation with logarithmic terms hidden.

Gradient methods
  GD               Smooth: $O(n/\epsilon)$ [24, 13]             P-L: $O((n/\mu)\log(1/\epsilon))$ [25, 17]   additional cond.: --
  Best available   Smooth: $\tilde{O}(n/\epsilon^{7/8})$ [9]    P-L: --                                      additional cond.: smooth gradient
  Best available   Smooth: $\tilde{O}(n/\epsilon^{5/6})$ [9]    P-L: --                                      additional cond.: smooth Hessian

Stochastic gradient methods
  SGD              Smooth: $O(1/\epsilon^{2})$ [24, 26]         P-L: $O(1/(\mu^{2}\epsilon))$ [17]           additional cond.: $H^* = O(1)$
  Best available   Smooth: $O(n + n^{2/3}/\epsilon)$ [26, 27]   P-L: $\tilde{O}(n + n^{2/3}/\mu)$ [26, 27]   additional cond.: --
  SCSG             Smooth: $O(\epsilon^{-5/3} \wedge n^{2/3}/\epsilon)$   P-L: $\tilde{O}\big((\frac{1}{\mu\epsilon} \wedge n) + \frac{1}{\mu}(\frac{1}{\mu\epsilon} \wedge n)^{2/3}\big)$   additional cond.: $H^* = O(1)$
As in the convex case, gradient methods have better dependence on $\epsilon$ in the non-convex case but worse dependence on $n$. This is due to the requirement of computing a full gradient. Comparing the complexity of SGD and the best achievable rate for stochastic gradient methods, achieved via variance-reduction methods, the dependence on $\epsilon$ is significantly improved in the latter case. However, unless $\epsilon \ll n^{-1/2}$, SGD has similar or even better theoretical complexity than gradient methods and existing variance-reduction methods. In practice, it is often the case that $n$ is very large ($10^5$ to $10^9$) while the target accuracy is moderate ($10^{-1}$ to $10^{-3}$). In this case, SGD has a meaningful advantage over other methods, deriving from the fact that it does not require a full gradient computation. This motivates the following research question: Is there an algorithm that

- achieves/beats the theoretical complexity of SGD in the regime of modest target accuracy;
- and achieves/beats the theoretical complexity of existing variance-reduction methods in the regime of high target accuracy?

The question has been partially answered in the convex case by [21] in their formulation of the stochastically controlled stochastic gradient (SCSG) methods. When the target accuracy is low, SCSG has the same $O(\epsilon^{-2})$ rate as SGD but with a much smaller data-dependent constant factor (which does not even require bounded gradients). When the target accuracy is high, SCSG achieves the same rate as the best non-accelerated methods, $O(n/\epsilon)$. Despite the gap between this and the optimal rate, SCSG is the first known algorithm that provably achieves the desired performance in both regimes.

In this paper, we generalize SCSG to the non-convex setting which, surprisingly, provides a completely affirmative answer to the question raised before. By only assuming smoothness of each component, as in almost all other works, SCSG is always $O(\epsilon^{-1/3})$ faster than SGD and is never worse than recently developed stochastic gradient methods that achieve the best rate. When $\epsilon \gg 1/n$, SCSG is at least $O((\epsilon n)^{2/3})$ faster than the best SVRG-type algorithms. Compared with gradient methods, SCSG has a better convergence rate provided $\epsilon \gg n^{-6/5}$, which is the common setting in practice. Interestingly, there is a parallel to recent advances in gradient methods: [9] improved the classical $O(\epsilon^{-1})$ rate of gradient descent to $O(\epsilon^{-5/6})$; this parallels the improvement of SCSG over SGD from $O(\epsilon^{-2})$ to $O(\epsilon^{-5/3})$.
Beyond the theoretical advantages of SCSG, we also show that SCSG yields good empirical performance for the training of multi-layer neural networks. It is worth emphasizing that the mechanism by which SCSG achieves acceleration (variance reduction) is qualitatively different from other speed-up techniques, including momentum [28] and adaptive stepsizes [18]. It will be of interest in future work to explore combinations of these various approaches in the training of deep neural networks.
The rest of the paper is organized as follows: in Section 2 we discuss our notation and assumptions and we state the basic SCSG algorithm. We present the theoretical convergence analysis in Section 3. Experimental results are presented in Section 4. All the technical proofs are relegated to the Appendices.
2 Notation, Assumptions and Algorithm

We use $\|\cdot\|$ to denote the Euclidean norm and write $\min\{a, b\}$ as $a \wedge b$ for brevity throughout the paper. The notation $\tilde{O}$, which hides logarithmic terms, will only be used to maximize readability in our presentation but will not be used in the formal analysis.

We define computation cost using the IFO framework of [1], which assumes that sampling an index $i$ and accessing the pair $(\nabla f_i(x), f_i(x))$ incur a unit of cost. For brevity, we write $\nabla f_I(x)$ for $\frac{1}{|I|}\sum_{i\in I}\nabla f_i(x)$. Note that calculating $\nabla f_I(x)$ incurs $|I|$ units of computational cost. $x$ is called an $\epsilon$-accurate solution iff $\mathbb{E}\|\nabla f(x)\|^2 \le \epsilon$. The minimum IFO complexity to reach an $\epsilon$-accurate solution is denoted by $C_{\mathrm{comp}}(\epsilon)$.

Recall that a random variable $N$ has a geometric distribution, $N \sim \mathrm{Geom}(\gamma)$, if $N$ is supported on the non-negative integers² with
$$P(N = k) = \gamma^k(1-\gamma), \quad \forall k = 0, 1, \ldots$$
An elementary calculation shows that
$$\mathbb{E}_{N\sim\mathrm{Geom}(\gamma)}\,N = \frac{\gamma}{1-\gamma}. \qquad (2)$$
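Identity (2) is easy to check numerically; note that NumPy's geometric sampler counts trials until the first success (support {1, 2, ...}), so one is subtracted to match the convention used here.

    import numpy as np

    rng = np.random.default_rng(0)
    gamma = 0.9
    samples = rng.geometric(1.0 - gamma, size=100_000) - 1  # N ~ Geom(gamma) on {0, 1, ...}
    print(samples.mean(), gamma / (1.0 - gamma))            # both close to 9.0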
To formulate our complexity bounds, we define
$$f^* = \inf_x f(x), \qquad \Delta f = f(\tilde{x}_0) - f^*.$$
Further, we define $H^*$ as an upper bound on the variance of the stochastic gradients, i.e.,
$$H^* = \sup_x \frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x) - \nabla f(x)\|^2. \qquad (3)$$
The assumption A1 on the smoothness of individual functions will be made throughout this paper.

A1. $f_i$ is differentiable with
$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|$$
for some $L < \infty$ and all $i \in \{1, \ldots, n\}$.

As a direct consequence of assumption A1, it holds for any $x, y \in \mathbb{R}^d$ that
$$-\frac{L}{2}\|x-y\|^2 \le f_i(x) - f_i(y) - \langle\nabla f_i(y), x - y\rangle \le \frac{L}{2}\|x-y\|^2. \qquad (4)$$
In this paper, we also consider the following Polyak-Lojasiewicz (P-L) condition [25]. It is weaker than strong convexity as well as other popular conditions that have appeared in the optimization literature; see [17] for an extensive discussion.

A2. $f(x)$ satisfies the P-L condition with $\mu > 0$ if
$$\|\nabla f(x)\|^2 \ge 2\mu\,(f(x) - f(x^*)),$$
where $x^*$ is the global minimum of $f$.

²Here we allow $N$ to be zero to facilitate the analysis.
2.1 Generic form of SCSG methods

The algorithm we propose in this paper is similar to that of [14] except (critically) that the number of inner loops is a geometric random variable. This is an essential component in the analysis of SCSG, and, as we will show below, it is key in allowing us to extend the complexity analysis for SCSG to the non-convex case. Moreover, the algorithm that we present here employs a mini-batch procedure in the inner loop and outputs a random sample instead of an average of the iterates. The pseudo-code is shown in Algorithm 1.
Algorithm 1 (Mini-Batch) Stochastically Controlled Stochastic Gradient (SCSG) method for smooth non-convex finite-sum objectives

Inputs: Number of stages $T$, initial iterate $\tilde{x}_0$, stepsizes $(\eta_j)_{j=1}^T$, batch sizes $(B_j)_{j=1}^T$, mini-batch sizes $(b_j)_{j=1}^T$.
Procedure
 1: for $j = 1, 2, \cdots, T$ do
 2:   Uniformly sample a batch $I_j \subset \{1, \cdots, n\}$ with $|I_j| = B_j$;
 3:   $g_j \leftarrow \nabla f_{I_j}(\tilde{x}_{j-1})$;
 4:   $x_0^{(j)} \leftarrow \tilde{x}_{j-1}$;
 5:   Generate $N_j \sim \mathrm{Geom}\big(B_j/(B_j + b_j)\big)$;
 6:   for $k = 1, 2, \cdots, N_j$ do
 7:     Randomly pick $\tilde{I}_{k-1} \subset [n]$ with $|\tilde{I}_{k-1}| = b_j$;
 8:     $\nu_{k-1}^{(j)} \leftarrow \nabla f_{\tilde{I}_{k-1}}(x_{k-1}^{(j)}) - \nabla f_{\tilde{I}_{k-1}}(x_0^{(j)}) + g_j$;
 9:     $x_k^{(j)} \leftarrow x_{k-1}^{(j)} - \eta_j \nu_{k-1}^{(j)}$;
10:   end for
11:   $\tilde{x}_j \leftarrow x_{N_j}^{(j)}$;
12: end for
Output: (Smooth case) Sample $\tilde{x}_T^*$ from $(\tilde{x}_j)_{j=1}^T$ with $P(\tilde{x}_T^* = \tilde{x}_j) \propto \eta_j B_j / b_j$; (P-L case) $\tilde{x}_T$.
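A minimal NumPy sketch of Algorithm 1 is given below. It returns the final iterate (the P-L output); the smooth-case output would instead sample one of the epoch outputs with probability proportional to eta_j B_j / b_j. The least-squares example at the end is purely illustrative.

    import numpy as np

    def scsg(grad, x0, n, T, eta, B, b, rng=np.random.default_rng(0)):
        """grad(x, idx) must return the mini-batch gradient over indices idx."""
        x_tilde = x0.copy()
        for _ in range(T):
            I = rng.choice(n, size=B, replace=False)
            g = grad(x_tilde, I)                        # anchor gradient g_j (line 3)
            x = x_tilde.copy()
            N = rng.geometric(b / (B + b)) - 1          # N ~ Geom(B/(B+b)), E[N] = B/b
            for _ in range(N):
                J = rng.choice(n, size=b, replace=False)
                nu = grad(x, J) - grad(x_tilde, J) + g  # variance-reduced direction (line 8)
                x = x - eta * nu
            x_tilde = x
        return x_tilde

    # Example: least squares (1/n) sum_i (a_i^T x - y_i)^2 with synthetic data.
    A = np.random.default_rng(1).normal(size=(1000, 5))
    y = A @ np.ones(5)
    g = lambda x, idx: 2.0 * A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)
    x_hat = scsg(g, np.zeros(5), n=1000, T=30, eta=0.01, B=100, b=10)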
As seen in the pseudo-code, the SCSG method consists of multiple epochs. In the $j$-th epoch, a mini-batch of size $B_j$ is drawn uniformly from the data and a sequence of mini-batch SVRG-type updates is implemented, with the total number of updates being randomly generated from a geometric distribution with mean $B_j/b_j$. Finally it outputs a random sample from $(\tilde{x}_j)_{j=1}^T$. This is the standard way, proposed by [23], as opposed to computing $\arg\min_{j\le T}\|\nabla f(\tilde{x}_j)\|$, which requires additional overhead. By (2), the average total cost is
$$\sum_{j=1}^{T}\big(B_j + b_j\cdot\mathbb{E}N_j\big) = \sum_{j=1}^{T}\Big(B_j + b_j\cdot\frac{B_j}{b_j}\Big) = 2\sum_{j=1}^{T}B_j. \qquad (5)$$
Define $T(\epsilon)$ as the minimum number of epochs such that all outputs afterwards are $\epsilon$-accurate solutions, i.e.,
$$T(\epsilon) = \min\{T : \mathbb{E}\|\nabla f(\tilde{x}_{T'}^*)\|^2 \le \epsilon \text{ for all } T' \ge T\}.$$
Recalling the definition of $C_{\mathrm{comp}}(\epsilon)$ at the beginning of this section, the average IFO complexity to reach an $\epsilon$-accurate solution is
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) \le 2\sum_{j=1}^{T(\epsilon)}B_j.$$
2.2 Parameter settings
The generic form (Algorithm 1) allows for flexibility in both the stepsize, $\eta_j$, and the batch/mini-batch sizes, $(B_j, b_j)$. In order to minimize the amount of tuning needed in practice, we provide several default settings which have theoretical support. The settings and the corresponding complexity results are summarized in Table 2. Note that all settings fix $b_j = 1$ since this yields the best rate, as will be shown in Section 3. However, in practice a reasonably large mini-batch size $b_j$ might be favorable due to the acceleration that can be achieved by vectorization; see Section 4 for more discussion of this point.
Table 2: Parameter settings analyzed in this paper.

             $\eta_j$             $B_j$                              $b_j$   Type of objectives    $\mathbb{E}C_{\mathrm{comp}}(\epsilon)$
Version 1    $1/(2LB^{2/3})$      $\frac{1}{\epsilon} \wedge n$      1       Smooth                $O\big(\epsilon^{-5/3} \wedge \frac{n^{2/3}}{\epsilon}\big)$
Version 2    $1/(2LB_j^{2/3})$    $j^{3/2} \wedge n$                 1       Smooth                $\tilde{O}\big(\epsilon^{-5/3} \wedge \frac{n^{2/3}}{\epsilon}\big)$
Version 3    $1/(2LB_j^{2/3})$    $\frac{1}{\mu\epsilon} \wedge n$   1       Polyak-Lojasiewicz    $\tilde{O}\big((\frac{1}{\mu\epsilon} \wedge n) + \frac{1}{\mu}(\frac{1}{\mu\epsilon} \wedge n)^{2/3}\big)$

3 Convergence Analysis

3.1 One-epoch analysis
First we present the analysis for a single epoch. Given $j$, we define
$$e_j = \nabla f_{I_j}(\tilde{x}_{j-1}) - \nabla f(\tilde{x}_{j-1}). \qquad (6)$$
As shown in [14], the gradient update $\nu_k^{(j)}$ is a biased estimate of the gradient $\nabla f(x_k^{(j)})$ conditioning on the current random index $i_k$. Specifically, within the $j$-th epoch,
$$\mathbb{E}_{\tilde{I}_k}\,\nu_k^{(j)} = \nabla f(x_k^{(j)}) + \nabla f_{I_j}(x_0^{(j)}) - \nabla f(x_0^{(j)}) = \nabla f(x_k^{(j)}) + e_j.$$
This reveals the basic qualitative difference between SVRG and SCSG. Most of the novelty in our analysis lies in dealing with the extra term $e_j$. Unlike [14], we do not assume $\|x_k^{(j)} - x^*\|$ to be bounded, since this is invalid in unconstrained problems, even in convex cases.

By careful analysis of primal and dual gaps [cf. 5], we find that the stepsize $\eta_j$ should scale as $(B_j/b_j)^{-2/3}$. The same phenomenon has also been observed in [26, 27, 4] when $b_j = 1$ and $B_j = n$.

Theorem 3.1. Let $\eta_j L = \theta(B_j/b_j)^{-2/3}$. Suppose $\theta \le 1/6$ and $B_j \ge 9$ for all $j$; then under Assumption A1,
$$\mathbb{E}\|\nabla f(\tilde{x}_j)\|^2 \le \frac{5L}{\theta}\Big(\frac{b_j}{B_j}\Big)^{1/3}\,\mathbb{E}\big(f(\tilde{x}_{j-1}) - f(\tilde{x}_j)\big) + \frac{6\,I(B_j < n)}{B_j}\,H^*. \qquad (7)$$
The proof is presented in Appendix B. It is not surprising that a large mini-batch size increases the theoretical complexity, as in the analysis of mini-batch SGD. For this reason we restrict most of our subsequent analysis to $b_j \equiv 1$.
3.2 Convergence analysis for smooth non-convex objectives
When only assuming smoothness, the output $\tilde{x}_T^*$ is a random element of $(\tilde{x}_j)_{j=1}^T$. Telescoping (7) over all epochs, we easily obtain the following result.

Theorem 3.2. Under the specifications of Theorem 3.1 and Assumption A1,
$$\mathbb{E}\|\nabla f(\tilde{x}_T^*)\|^2 \le \frac{\frac{5L}{\theta}\Delta f + 6\sum_{j=1}^{T} b_j^{-1/3}B_j^{-2/3}\,I(B_j < n)\,H^*}{\sum_{j=1}^{T} b_j^{-1/3}B_j^{1/3}}.$$
This theorem covers many existing results. When $B_j = n$ and $b_j = 1$, Theorem 3.2 implies that $\mathbb{E}\|\nabla f(\tilde{x}_T^*)\|^2 = O\big(\frac{L\Delta f}{T n^{1/3}}\big)$ and hence $T(\epsilon) = O\big(1 + \frac{L\Delta f}{\epsilon n^{1/3}}\big)$. This yields the same complexity bound $\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\big(n + \frac{n^{2/3}L\Delta f}{\epsilon}\big)$ as SVRG [26]. On the other hand, when $b_j = B_j \equiv B$ for some $B < n$, Theorem 3.2 implies that $\mathbb{E}\|\nabla f(\tilde{x}_T^*)\|^2 = O\big(\frac{L\Delta f}{T} + \frac{H^*}{B}\big)$. The second term can be made $O(\epsilon)$ by setting $B = O\big(\frac{H^*}{\epsilon}\big)$. Under this setting, $T(\epsilon) = O\big(\frac{L\Delta f}{\epsilon}\big)$ and $\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\big(\frac{L\Delta f\, H^*}{\epsilon^2}\big)$. This is the same rate as in [26] for SGD.
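The scalings just discussed can be compared numerically; the snippet below sets all constants (L, Delta f, H*) to 1, so it only illustrates how the leading terms behave in epsilon and n.

    def sgd_rate(eps):      return 1.0 / eps ** 2
    def svrg_rate(eps, n):  return n + n ** (2.0 / 3.0) / eps
    def scsg_rate(eps, n):  return min(1.0 / eps ** (5.0 / 3.0), n ** (2.0 / 3.0) / eps)

    n = 10 ** 6
    for eps in (1e-1, 1e-2, 1e-3):
        print(eps, sgd_rate(eps), svrg_rate(eps, n), scsg_rate(eps, n))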
However, both of the above settings are suboptimal, since they either set the batch sizes $B_j$ too large or set the mini-batch sizes $b_j$ too large. By Theorem 3.2, SCSG can be regarded as an interpolation between SGD and SVRG. By leveraging these two parameters, SCSG is able to outperform both methods.

We start by considering a constant batch/mini-batch size $B_j \equiv B$, $b_j \equiv 1$. Similar to SGD and SCSG, $B$ should be at least $O(H^*/\epsilon)$. In applications like the training of neural networks, the required accuracy is moderate and hence a small batch size suffices. This is particularly important since the gradient can be computed without communication overhead, which is the bottleneck of SVRG-type algorithms. As shown in Corollary 3.3 below, the complexity of SCSG beats both SGD and SVRG.
Corollary 3.3 (Constant batch sizes). Set
$$b_j \equiv 1, \qquad B_j \equiv B = \min\Big\{\frac{12H^*}{\epsilon}, n\Big\}, \qquad \eta_j \equiv \eta = \frac{1}{6LB^{2/3}}.$$
Then it holds that
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\bigg(\frac{H^*}{\epsilon}\wedge n + \frac{L\Delta f}{\epsilon}\Big(\frac{H^*}{\epsilon}\wedge n\Big)^{2/3}\bigg).$$
Assuming that $L\Delta f, H^* = O(1)$, the above bound can be simplified to
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\bigg(\frac{1}{\epsilon}\wedge n + \frac{1}{\epsilon}\Big(\frac{1}{\epsilon}\wedge n\Big)^{2/3}\bigg) = O\bigg(\frac{n^{2/3}}{\epsilon}\wedge\frac{1}{\epsilon^{5/3}}\bigg).$$
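In code, the constant-batch setting of Corollary 3.3 amounts to the following helper; H_star, L and eps are problem-dependent inputs that the user must estimate, and the function name is ours.

    import math

    def corollary_3_3_params(H_star, eps, n, L):
        B = min(math.ceil(12.0 * H_star / eps), n)  # B = min{12 H*/eps, n}
        eta = 1.0 / (6.0 * L * B ** (2.0 / 3.0))    # eta = 1 / (6 L B^{2/3})
        return B, eta                               # with b_j = 1 throughout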
When the target accuracy is high, one might consider a sequence of increasing batch sizes. Heuristically, a large batch is wasteful at the early stages when the iterates are inaccurate. Fixing the batch size to be $n$ as in SVRG is obviously suboptimal. Via an involved analysis, we find that $B_j \propto j^{3/2}$ gives the best complexity among the class of SCSG algorithms.

Corollary 3.4 (Time-varying batch sizes). Set
$$b_j \equiv 1, \qquad B_j = \min\big\{\lceil j^{3/2}\rceil, n\big\}, \qquad \eta_j = \frac{1}{6LB_j^{2/3}}.$$
Then it holds that
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\bigg(\min\bigg\{\frac{(L\Delta f)^{5/3} + (H^*)^{5/3}\log^5(H^*/\epsilon)}{\epsilon^{5/3}},\; n + \frac{n^{2/3}}{\epsilon}\big(L\Delta f + H^*\log n\big)\bigg\}\bigg). \qquad (8)$$
The proofs of both Corollary 3.3 and Corollary 3.4 are presented in Appendix C. To simplify the bound (8), we assume that $L\Delta f, H^* = O(1)$ in order to highlight the dependence on $\epsilon$ and $n$. Then (8) can be simplified to
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\bigg(\frac{\log^5\frac{1}{\epsilon}}{\epsilon^{5/3}}\wedge\Big(n + \frac{n^{2/3}\log n}{\epsilon}\Big)\bigg) = \tilde{O}\bigg(\frac{1}{\epsilon^{5/3}}\wedge\frac{n^{2/3}}{\epsilon}\bigg).$$
The log-factor $\log^5\frac{1}{\epsilon}$ is purely an artifact of our proof. It can be reduced to $\log^{\frac{3}{2}+\delta}\frac{1}{\epsilon}$ for any $\delta > 0$ by setting $B_j \propto j^{3/2}(\log j)^{\frac{3}{2}+\delta}$; see Remark 1 in Appendix C.
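The schedule of Corollary 3.4 is straightforward to generate; this sketch uses the plain B_j = min(ceil(j^{3/2}), n) version without the optional log factor.

    import math

    def time_varying_schedule(T, n, L):
        for j in range(1, T + 1):
            B_j = min(math.ceil(j ** 1.5), n)
            eta_j = 1.0 / (6.0 * L * B_j ** (2.0 / 3.0))
            yield B_j, eta_j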
3.3 Convergence analysis for P-L objectives
When the component $f_i(x)$ satisfies the P-L condition, it is known that the global minimum can be found efficiently by SGD [17] and by SVRG-type algorithms [26, 4]. Similarly, SCSG can also achieve this. As in the last subsection, we start from a generic result bounding $\mathbb{E}(f(\tilde{x}_T) - f^*)$ and then consider specific settings of the parameters as well as their complexity bounds.
Theorem 3.5. Let $\lambda_j = \frac{5Lb_j^{1/3}}{\theta\mu B_j^{1/3} + 5Lb_j^{1/3}}$. Then, under the same settings as Theorem 3.2,
$$\mathbb{E}(f(\tilde{x}_T) - f^*) \le \lambda_T\lambda_{T-1}\cdots\lambda_1\cdot\Delta f + 6\theta H^*\sum_{j=1}^{T}\frac{\lambda_T\lambda_{T-1}\cdots\lambda_{j+1}\cdot I(B_j < n)}{\theta\mu B_j + 5Lb_j^{1/3}B_j^{2/3}}.$$
The proofs and additional discussion are presented in Appendix D. Again, Theorem 3.5 covers existing complexity bounds for both SGD and SVRG. In fact, when $B_j = b_j \equiv B$ as in SGD, via some calculation, we obtain that
$$\mathbb{E}(f(\tilde{x}_T) - f^*) = O\bigg(\Big(\frac{L}{\mu + L}\Big)^T\cdot\Delta f + \frac{H^*}{\mu B}\bigg).$$
The second term can be made $O(\epsilon)$ by setting $B = O\big(\frac{H^*}{\mu\epsilon}\big)$, in which case $T(\epsilon) = O\big(\frac{L}{\mu}\log\frac{\Delta f}{\epsilon}\big)$. As a result, the average cost to reach an $\epsilon$-accurate solution is $\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\big(\frac{LH^*}{\mu^2\epsilon}\big)$, which is the same as [17]. On the other hand, when $B_j \equiv n$ and $b_j \equiv 1$ as in SVRG, Theorem 3.5 implies that
$$\mathbb{E}(f(\tilde{x}_T) - f^*) = O\bigg(\Big(\frac{L}{\mu n^{1/3} + L}\Big)^T\cdot\Delta f\bigg).$$
This entails that $T(\epsilon) = O\big(\big(1 + \frac{L}{\mu n^{1/3}}\big)\log\frac{1}{\epsilon}\big)$ and hence $\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\big(\big(n + \frac{n^{2/3}L}{\mu}\big)\log\frac{1}{\epsilon}\big)$, which is the same as [26].

By leveraging the batch and mini-batch sizes, we obtain a counterpart of Corollary 3.3, given below.
Corollary 3.6. Set
$$b_j \equiv 1, \qquad B_j \equiv B = \min\Big\{\frac{12H^*}{\mu\epsilon}, n\Big\}, \qquad \eta_j \equiv \eta = \frac{1}{6LB^{2/3}}.$$
Then it holds that
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O\bigg(\bigg\{\frac{H^*}{\mu\epsilon}\wedge n + \frac{1}{\mu}\Big(\frac{H^*}{\mu\epsilon}\wedge n\Big)^{2/3}\bigg\}\log\frac{\Delta f}{\epsilon}\bigg).$$
Recalling the results from Table 1, SCSG is $O\big(\mu^{-1}\wedge(\mu\epsilon)^{-1/3}\big)$ faster than SGD and is never worse than SVRG. When both $\mu$ and $\epsilon$ are moderate, the acceleration of SCSG over SVRG is significant. Unlike the smooth case, we do not find any choice of settings that achieves a better rate than Corollary 3.6.
4 Experiments
We evaluate SCSG and mini-batch SGD on the MNIST dataset with (1) a three-layer fully-connected neural network with 512 neurons in each layer (FCN for short) and (2) a standard convolutional neural network, LeNet [20] (CNN for short), which has two convolutional layers with 32 and 64 filters of size 5 × 5 respectively, followed by two fully-connected layers with output sizes 1024 and 10. Max pooling is applied after each convolutional layer. The MNIST dataset of handwritten digits has 50,000 training examples and 10,000 test examples. The digits have been size-normalized and centered in a fixed-size image. Each image is 28 pixels by 28 pixels. All experiments were carried out on an Amazon p2.xlarge node with an NVIDIA GK210 GPU, with algorithms implemented in TensorFlow 1.0.
Due to memory issues, sampling a chunk of data is costly. We avoid this by modifying the inner loop: instead of sampling mini-batches from the whole dataset, we split the batch $I_j$ into $B_j/b_j$ mini-batches and run SVRG-type updates sequentially on each. Despite the theoretical advantage of setting $b_j = 1$, we consider practical settings $b_j > 1$ to take advantage of the acceleration obtained by vectorization. We initialized parameters with TensorFlow's default Xavier uniform initializer. In all experiments below, we show the results corresponding to the best-tuned stepsizes.
We consider three algorithms: (1) SGD with a fixed batch size $B \in \{512, 1024\}$; (2) SCSG with a fixed batch size $B \in \{512, 1024\}$ and a fixed mini-batch size $b = 32$; (3) SCSG with time-varying batch sizes $B_j = \lceil j^{3/2}\rceil\wedge n$ and $b_j = \lceil B_j/32\rceil$. To be clear, given $T$ epochs, the IFO complexities of the three algorithms are $TB$, $2TB$ and $2\sum_{j=1}^{T}B_j$, respectively. We run each algorithm for 20 passes over the data. It is worth mentioning that the largest batch size in the third algorithm is $\lceil 275^{1.5}\rceil = 4561$, which is relatively small compared to the sample size 50,000.
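The IFO accounting used on the x-axes below can be made explicit as follows; the variant names are ours.

    import math

    def ifo_cost(T, n, B=None, variant="sgd"):
        if variant == "sgd":           # (1): T gradient batches of size B
            return T * B
        if variant == "scsg_fixed":    # (2): batch plus inner updates, cf. eq. (5)
            return 2 * T * B
        if variant == "scsg_varying":  # (3): 2 * sum_j B_j with B_j = min(ceil(j^1.5), n)
            return 2 * sum(min(math.ceil(j ** 1.5), n) for j in range(1, T + 1))
        raise ValueError(variant)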
We plot in Figure 1 the training and the validation loss against the IFO complexity, i.e., the number of passes over the data, for fair comparison. In all cases, both versions of SCSG outperform SGD, especially in terms of training loss. SCSG with time-varying batch sizes always has the best performance, and it is more stable than SCSG with a fixed batch size. For the latter, the acceleration is more significant after increasing the batch size to 1024. Both versions of SCSG provide strong evidence that variance reduction can be achieved efficiently without evaluating the full gradient.
[Figure 1 appears here: training log-loss (top row) and validation log-loss (bottom row) against #grad/n, for CNN and FCN with batch sizes 512 and 1024; curves compare SGD, SCSG with a fixed batch size (b = 32), and SCSG with B_j = j^1.5 and B/b = 32.]
Figure 1: Comparison between two versions of SCSG and mini-batch SGD of training loss (top row)
and validation loss (bottom row) against the number of IFO calls. The loss is plotted on a log-scale.
Each column represents an experiment with the setup printed on the top.
[Figure 2 appears here: training and validation log-loss for the CNN against wall clock time (0 to 200 seconds), comparing SCSG (B = j^1.5, B/b = 16) and SGD (B = j^1.5).]
Figure 2: Comparison between SCSG and mini-batch SGD of training loss and validation loss with a
CNN loss, against wall clock time. The loss is plotted on a log-scale.
Given 2B IFO calls, SGD implements updates on two fresh batches, while SCSG replaces the second batch by a sequence of variance-reduced updates. Thus, Figure 1 shows that the gain due to variance reduction is significant when the batch size is fixed. To further explore this, we compare SCSG with time-varying batch sizes to SGD with the same sequence of batch sizes. The results corresponding to the best-tuned constant stepsizes are plotted in Figure 3a. It is clear that the benefit from variance reduction is more significant when using time-varying batch sizes.

We also compare the performance of SGD with that of SCSG with time-varying batch sizes against wall clock time, when both algorithms are implemented in TensorFlow and run on an Amazon p2.xlarge node with an NVIDIA GK210 GPU. Due to the cost of computing the variance reduction terms in SCSG, each update of SCSG is slower per iteration compared to SGD. However, SCSG makes faster progress in terms of both training loss and validation loss compared to SGD in wall clock time. The results are shown in Figure 2.
[Figure 3 appears here: (a) training log-loss of SCSG and SGD with increasing batch sizes; (b) training log-loss of SCSG with different ratios B/b in {2, 5, 10, 16, 32}; panels for CNN and FCN.]
Finally, we examine the effect of $B_j/b_j$, namely the number of mini-batches within an iteration, since it affects efficiency in practice, where the computation time is not proportional to the batch size. Figure 3b shows the results for SCSG with $B_j = \lceil j^{3/2}\rceil\wedge n$ and $\lceil B_j/b_j\rceil \in \{2, 5, 10, 16, 32\}$. In general, a larger $B_j/b_j$ yields better performance. It would be interesting to explore the tradeoff between computation efficiency and this ratio on different platforms.
5 Conclusion and Discussion
We have presented the SCSG method for smooth, non-convex, finite-sum optimization problems.
SCSG is the first algorithm that achieves a uniformly better rate than SGD and is never worse
than SVRG-type algorithms. When the target accuracy is low, SCSG significantly outperforms the
SVRG-type algorithms. Unlike various other variants of SVRG, SCSG is clean in terms of both
implementation and analysis. Empirically, SCSG outperforms SGD in the training of multi-layer
neural networks.
Although we only consider the finite-sum objective in this paper, it is straightforward to extend SCSG to general stochastic optimization problems where the objective can be written as $\mathbb{E}_{\xi\sim F}\,f(x;\xi)$: at the beginning of the $j$-th epoch a batch of i.i.d. samples $(\xi_1, \ldots, \xi_{B_j})$ is drawn from the distribution $F$ and
$$g_j = \frac{1}{B_j}\sum_{i=1}^{B_j}\nabla f(\tilde{x}_{j-1};\xi_i) \qquad \text{(see line 3 of Algorithm 1)};$$
at the $k$-th step, a fresh sample $(\tilde{\xi}_1^{(k)}, \ldots, \tilde{\xi}_{b_j}^{(k)})$ is drawn from the distribution $F$ and
$$\nu_{k-1}^{(j)} = \frac{1}{b_j}\sum_{i=1}^{b_j}\nabla f(x_{k-1}^{(j)};\tilde{\xi}_i^{(k)}) - \frac{1}{b_j}\sum_{i=1}^{b_j}\nabla f(x_0^{(j)};\tilde{\xi}_i^{(k)}) + g_j \qquad \text{(see line 8 of Algorithm 1)}.$$
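A sketch of this streaming variant, where `sample(m)` draws m i.i.d. realizations of xi from F and `grad(x, xi)` returns the per-sample gradient of f(x; xi) (both user-supplied):

    import numpy as np

    def scsg_streaming(grad, sample, x0, T, eta, B, b, rng=np.random.default_rng(0)):
        x_tilde = x0.copy()
        for _ in range(T):
            batch = sample(B)
            g = np.mean([grad(x_tilde, xi) for xi in batch], axis=0)
            anchor, x = x_tilde.copy(), x_tilde.copy()
            N = rng.geometric(b / (B + b)) - 1   # E[N] = B / b
            for _ in range(N):
                fresh = sample(b)                # fresh draws at every inner step
                nu = np.mean([grad(x, xi) - grad(anchor, xi) for xi in fresh], axis=0) + g
                x = x - eta * nu
            x_tilde = x
        return x_tilde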
Our proof directly carries over to this case, by simply suppressing the term $I(B_j < n)$, and yields the bound $\tilde{O}(\epsilon^{-5/3})$ for smooth non-convex objectives and the bound $\tilde{O}(\mu^{-1}\epsilon^{-1} + \mu^{-5/3}\epsilon^{-2/3})$ for P-L objectives. These bounds are obtained simply by setting $n = \infty$ in our convergence analysis.
Compared to momentum-based methods [28] and methods with adaptive stepsizes [10, 18], the
mechanism whereby SCSG achieves acceleration is qualitatively different: while momentum aims at
balancing primal and dual gaps [5], adaptive stepsizes aim at balancing the scale of each coordinate,
and variance reduction aims at removing the noise. We believe that an algorithm that combines these
three techniques is worthy of further study, especially in the training of deep neural networks where
the target accuracy is modest.
Acknowledgments
The authors thank Zeyuan Allen-Zhu, Chi Jin, Nilesh Tripuraneni and anonymous reviewers for
helpful discussions.
References
[1] Alekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. ArXiv e-prints abs/1410.0723, 2014.
[2] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima for nonconvex optimization in linear time. arXiv preprint arXiv:1611.01146, 2016.
[3] Zeyuan Allen-Zhu. Natasha: Faster stochastic non-convex optimization via strongly non-convex parameter. arXiv preprint arXiv:1702.00763, 2017.
[4] Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. ArXiv e-prints abs/1603.05643, 2016.
[5] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint arXiv:1407.1537, 2014.
[6] Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-nonconvex objectives. ArXiv e-prints abs/1506.01972, 2015.
[7] Dimitri P Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913–926, 1997.
[8] Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Accelerated methods for non-convex optimization. arXiv preprint arXiv:1611.00756, 2016.
[9] Yair Carmon, Oliver Hinder, John C Duchi, and Aaron Sidford. "Convex until proven guilty": Dimension-free acceleration of gradient descent on non-convex functions. arXiv preprint arXiv:1705.02766, 2017.
[10] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
[11] Alexei A Gaivoronski. Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1. Optimization Methods and Software, 4(2):117–134, 1994.
[12] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
[13] Saeed Ghadimi and Guanghui Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1-2):59–99, 2016.
[14] Reza Harikandeh, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečný, and Scott Sallinen. Stop wasting my gradients: Practical SVRG. In Advances in Neural Information Processing Systems, pages 2242–2250, 2015.
[15] Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[16] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[17] Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016.
[18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[20] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[21] Lihua Lei and Michael I Jordan. Less than a single pass: Stochastically controlled stochastic gradient method. arXiv preprint arXiv:1609.03261, 2016.
[22] Peter McCullagh and John A Nelder. Generalized Linear Models. CRC Press, 1989.
[23] Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[24] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Massachusetts, 2004.
[25] Boris Teodorovich Polyak. Gradient methods for minimizing functionals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 3(4):643–653, 1963.
[26] Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. arXiv preprint arXiv:1603.06160, 2016.
[27] Sashank J Reddi, Suvrit Sra, Barnabás Póczos, and Alex Smola. Fast incremental method for nonconvex optimization. arXiv preprint arXiv:1603.06159, 2016.
[28] Ilya Sutskever, James Martens, George E Dahl, and Geoffrey E Hinton. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139–1147, 2013.
[29] Paul Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2):506–531, 1998.
[30] Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
6,445 | 683 | Learning Control Under Extreme
Uncertainty
Vijaykumar Gullapalli
Computer Science Department
University of Massachusetts
Amherst, MA 01003
Abstract
A peg-in-hole insertion task is used as an example to illustrate
the utility of direct associative reinforcement learning methods for
learning control under real-world conditions of uncertainty and
noise. Task complexity due to the use of an unchamfered hole
and a clearance of less than 0.2mm is compounded by the presence
of positional uncertainty of magnitude exceeding 10 to 50 times the
clearance. Despite this extreme degree of uncertainty, our results
indicate that direct reinforcement learning can be used to learn a
robust reactive control strategy that results in skillful peg-in-hole
insertions.
1 INTRODUCTION
Many control tasks of interest today involve controlling complex nonlinear systems
under uncertainty and noise. 1 Because traditional control design techniques are not
very effective under such circumstances, methods for learning control are becoming
increasingly popular. Unfortunately, in many of these control tasks, it is difficult
to obtain training information in the form of prespecified instructions on how to
perform the task. Therefore supervised learning methods are not directly applicable.
At the same time, evaluating the performance of a controller on the task is usually
fairly straightforward, and hence these tasks are ideally suited for the application
of associative reinforcement learning (Barto & Anandan, 1985).
1 For our purposes, noise can be regarded simply as one of the sources of uncertainty.
In associative reinforcement learning, the learning system's interactions with its
environment are evaluated by a critic, and the goal of the learning system is to
learn to respond to each input with the action that has the best expected evaluation. In learning control tasks, the learning system is the controller, its actions are
control signals, and the critic's evaluations are based on the performance criterion
associated with the control task. Two kinds of associative reinforcement learning
methods, direct and indirect, can be distinguished (e.g., Gullapalli, 1992). Indirect reinforcement learning methods construct and use a model of the environment
and the critic (modeled either separately or together), while direct reinforcement
learning methods do not.
We have previously argued (Gullapalli, 1992; Barto & Gullapalli, 1992) that in the
presence of uncertainty, hand-crafting or learning an adequate model-imperative
if one is to use indirect methods for training the controller-can be very difficult.
Therefore, it can be expeditious to use direct reinforcement learning methods in
such situations. In this paper, a peg-in-hole insertion task is used as an example to
illustrate the utility of direct associative reinforcement learning methods for learning
control under real-world conditions of uncertainty.
2 PEG-IN-HOLE INSERTION
Peg-in-hole insertion has been widely used by roboticists for testing various approaches to robot control and has also been studied as a canonical robot assembly
operation (Whitney, 1982; Gustavson, 1984; Gordon, 1986). Although the abstract
peg-in-hole task can be solved quite easily, real-world conditions of uncertainty due
to (1) errors and noise in sensory feedback, (2) errors in execution of motion commands, and (3) uncertainty due to movement of the part grasped by the robot can
substantially degrade the performance of traditional control methods. Approaches
proposed for peg-in-hole insertion under uncertainty can be grouped into two major
classes: methods based on off-line planning, and methods based on reactive control.
Off-line planning methods combine geometric analysis of the peg-hole configuration
with analysis of the task statics to determine motion strategies that will result in
successful insertion (Whitney, 1982; Gustavson, 1984; Gordon, 1986). In the presence of uncertainty in sensing and control, researchers have suggested incorporating
the uncertainty into the geometric model of the task in configuration space (e.g.,
Lozano-Perez et al., 1984; Erdmann, 1986; Caine et al., 1989; Donald, 1986). Offline planning is based on the assumption that a realistic characterization of the
margins of uncertainty is available, which is a strong assumption when dealing with
real-world systems.
Methods based on reactive control, in comparison, try to counter the effects of
uncertainty with on-line modification of the motion control based on sensory feedback. Often, compliant motion control is used, in which the trajectory is modified
by contact forces or tactile stimuli occurring during the motion. The compliant
behavior either is actively generated or occurs passively due to the physical characteristics of the robot (Whitney, 1982; Asada, 1990). However, as Asada (1990)
points out, many tasks including the peg insertion task require complex nonlinear compliance or admittance behavior that is beyond the capability of a passive
mechanism. Unfortunately, humans find it quite difficult to prespecify appropriate
compliant behavior (Lozano-Perez et al., 1984), especially in the presence of
uncertainty. Hence techniques for learning compliant behavior can be very useful.
We demonstrate our approach to learning a reactive control strategy for peg-in-hole
insertion by training a controller to perform peg-in-hole insertions using a Zebra
Zero robot. The Zebra Zero is equipped with joint position encoders and a six-axis force sensor at its wrist, whose outputs are all subject to uncertainty. Before
describing the controller and presenting its performance in peg insertion, we present
some experimental data quantifying the uncertainty in position and force sensors.
3 QUANTIFYING THE SENSOR UNCERTAINTY
In order to quantify the position uncertainty under varying load conditions similar
to those that occur when the peg is interacting with the hole, we compared the
sensed peg position with its actual position in cartesian space under different load
conditions. In one such experiment, the robot was commanded to maintain a fixed
position under five different load conditions applied sequentially: no load, and a
fixed load of 0.12 Kgf applied in the ±x and ±y directions. Under each condition,
the position and force feedback from the robot sensors, as well as the actual x-y
position of the peg were recorded.
The sensed and actual x-y positions of the peg are shown in Table 1. The sensed x-y
positions were computed from the joint positions sensed by the Zero's joint position
encoders. As can be seen from the table, there is a large discrepancy between the
sensed and actual positions of the peg: while the actual change in the peg's position
under the external load was of the order of 2 to 3mm, the largest sensed change
in position was less than 0.025mm. In comparison, the clearance between the peg
and the hole (in the 3D task) was 0.175mm. From observations of the robot, we
could determine that the uncertainty in position was primarily due to gear backlash.
Other factors affecting the uncertainty include the posture of the robot arm, which
affects the way the backlash is loaded, and interactions between the peg and the
environment.
Table 1: Sensed And Actual Positions Under 5 Different Load Conditions

Load Condition             Sensed x-y Position (mm)   Actual x-y Position (mm)
No load position           (0.0,  0.000000)           ( 0.0,  0.0)
With -y load               (0.0, -0.014673)           ( 0.0, -2.5)
With +x load               (0.0,  0.000000)           ( 1.9, -0.3)
With +y load               (0.0,  0.024646)           (-2.9, -0.2)
With -x load               (0.0,  0.010026)           ( 0.3,  2.2)
Final (no load) position   (0.0,  0.000000)           ( 0.0, -0.6)
Figure 1 shows 30 time-step samples of the force sensor output for each of the load
conditions described above. As can be seen from the figure, there is considerable
sensor noise, especially in recording moments. Although designing a controller that
can robustly perform peg insertions despite the large uncertainty in sensory input
is difficult, our results indicate that a controller can learn a robust peg insertion
strategy.
[Figure 1: time traces of the sensed forces (Fx, Fy, Fz) and moments (Mx, My, Mz) across six consecutive intervals: no load, load in -y, load in +x, load in +y, load in -x, and no load again]
Figure 1: 30 Time-step Samples Of The Sensed Forces and Moments Under 5
Different Load Conditions. With An Ideal Sensor, The Readings Would Be Constant
In Each 30 Time-step Interval.
4 LEARNING PEG-IN-HOLE INSERTION
Our approach to learning a reactive control strategy for peg insertion under uncertainty is based on active generation of compliant behavior using a nonlinear
mapping from sensed positions and forces to position commands. 2 The controller
learns this mapping through repeated attempts at peg insertion.
The Peg Insertion Tasks As depicted in Figure 2, both 2D and 3D versions
of the peg insertion task were attempted. In the 2D version of the task, the peg
used was 50mm long and 22.225mm (7/8in) wide, while the hole was 23.8125mm
(15/16in) wide. Thus the clearance between the peg and the hole was 0.79375mm
(1/32in). In the 3D version, the peg used was 30mm long and 6mm in diameter,
while the hole was 6.35mm in diameter. Thus the clearance in the 3D case was
0.175mm.
The Controller
The controller was implemented as a connectionist network
that operated in closed loop with the robot so that it could learn a reactive control strategy for performing peg insertions. The network used in the 2D task had
6 inputs, viz., the sensed positions and forces, (X, Y, Θ) and (Fx, Fy, Mz), three
2 See also (Gullapalli et al., 1992).
[Figure 2: schematics of the 2D peg insertion task (sensed position (X, Y, Θ) and forces (Fx, Fy, Mz); position commands (x, y, θ)) and the 3D peg insertion task (sensed position (X, Y, Z, Θ1, Θ2) and forces (Fx, Fy, Fz, Mx, My, Mz); position commands (x, y, z, θ1, θ2))]
Figure 2: The 2D And 3D Peg-in-hole Insertion Tasks.
outputs forming the position command (x, y, θ), and two hidden layers of 15 units
each. For the 3D task, the network had 11 inputs, the sensed positions and forces,
(X, Y, Z, Θ1, Θ2) and (Fx, Fy, Fz, Mx, My, Mz), five outputs forming the position
command (x, y, z, θ1, θ2), and two hidden layers of 30 units each.
In both networks, the hidden units used were back-propagation units, while the
output units used were stochastic real-valued (SRV) reinforcement learning units
(Gullapalli, 1990). SRV units use a direct reinforcement learning algorithm to find
the best real-valued output for each input (see Gullapalli (1990) for details). The
position inputs to the network were computed from the sensed joint positions using
the forward kinematics equations for the Zero. The force and moment inputs were
those sensed by the six-axis force sensor. A PD servo loop was used to servo the
robot to the position output by the network at each time step.
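The SRV learning rule is given in full in (Gullapalli, 1990); as a rough Python sketch of the idea only (our own simplification, with illustrative names and constants, not the exact published rule), an SRV output unit explores with Gaussian noise around a learned mean and reinforces perturbations that raise the evaluation:

import numpy as np

class SRVUnit:
    """Simplified stochastic real-valued (SRV) unit: Gaussian exploration around
    a learned mean, reinforced by an external evaluation r in [0, 1]."""
    def __init__(self, n_inputs, lr=0.01, sigma=0.1, seed=0):
        self.w = np.zeros(n_inputs)        # weights defining the action mean
        self.lr, self.sigma = lr, sigma
        self.rng = np.random.default_rng(seed)
        self.r_bar = 0.5                   # running baseline of the evaluation

    def act(self, x):
        self.mu = self.w @ x               # deterministic part of the action
        self.a = self.mu + self.sigma * self.rng.standard_normal()
        return self.a

    def learn(self, x, r):
        # Correlate the exploration direction with above-baseline evaluations
        # (a REINFORCE-style rule, used here only as a stand-in).
        self.w += self.lr * (r - self.r_bar) * ((self.a - self.mu) / self.sigma) * x
        self.r_bar += 0.1 * (r - self.r_bar)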
Training Methodology The controller network was trained in a sequence of
trials, each of which started with the peg at a random position and orientation
with respect to the hole and ended either when the peg was successfully inserted in
the hole, or when 100 time steps had elapsed. An insertion was termed successful
when the peg was inserted to a depth of 25mm into the hole. At each time step
during training, the sensed peg position and forces were input to the network, and
the computed control output was executed by the robot, resulting in some motion
of the peg. An evaluation of the controller's performance, r, ranging from 0 to 1
with 1 denoting the best possible evaluation, was computed based on the new peg
position and the forces acting on the peg as
r = max(0.0, 1.0 - 0.01 ||position error||)               if all forces ≤ 0.5 Kgf,
r = max(0.0, 1.0 - 0.01 ||position error|| - 0.1 Fmax)    otherwise,
where Fmax denotes the largest magnitude force component. Thus, the closer the
sensed peg position was to the desired position with the peg inserted in the hole,
the higher the evaluation. Large sensed forces, however, reduced the evaluation.
Using this evaluation, the network adjusted its weights appropriately and the cycle
was repeated.
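Written as code, the evaluation above is short; the sketch below (variable names ours) assumes the position error is measured in mm and the forces in Kgf, as in the text:

import numpy as np

def evaluation(position_error, forces, f_thresh=0.5):
    """Evaluation r in [0, 1]: penalize distance from the desired inserted-peg
    position and, when contact forces exceed 0.5 Kgf, the largest force too."""
    err = np.linalg.norm(position_error)
    if np.all(np.abs(forces) <= f_thresh):
        return max(0.0, 1.0 - 0.01 * err)
    f_max = np.max(np.abs(forces))   # largest magnitude force component
    return max(0.0, 1.0 - 0.01 * err - 0.1 * f_max)

print(evaluation(np.array([5.0, 2.0, 0.0]), np.array([0.2, -0.1, 0.3])))  # ~0.95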
5 PERFORMANCE RESULTS
A learning curve showing the final evaluation over 500 consecutive trials on the 2D
task is shown in Figure 3 (a). The final evaluation levels off close to 1 after about
[Figure 3: two plots over trials 25-475; (a) smoothed final evaluation (0 to 1) vs. trial number, (b) smoothed insertion time in simulation time steps vs. trial number]
Figure 3: Smoothed Final Evaluation Received And Smoothed Insertion Time (In
Simulation Time Steps) Taken On Each Of 500 Consecutive Trials On The 2D Peg
Insertion Task. The Smoothed Curve Was Obtained By Filtering The Raw Data
Using A Moving-Average Window Of 25 Consecutive Values.
150 trials because after that amount of training, the controller is consistently able
to perform successful insertions within 100 time steps. However, performance as
measured by insertion time continues to improve, as is indicated by the learning
curve in Figure 3 (b), which shows the time to insertion decreasing continuously
over the 500 trials. These curves indicate that the controller becomes progressively
more skillful at peg insertion with training. Similar results were obtained for the
3D task, although learning was slower in this case. The performance curves for the
3D task are shown in Figure 4.
6 DISCUSSION AND CONCLUSIONS
The high degree of uncertainty in the sensory feedback from the Zebra Zero, coupled with the fine motion control requirements of peg-in-hole insertion make the
task under consideration an example of learning control under extreme uncertainty.
The positional uncertainty, in particular, is of the order of 10 to 50 times the clearance between the peg and the hole and is primarily due to gear backlash. There is
also significant uncertainty in the sensed forces and moments due to sensor noise.
Our results indicate that direct reinforcement learning can be used to learn a reactive control strategy that works robustly even in the presence of a high degree of
uncertainty.
[Figure 4: two plots over 800 trials for the 3D task; (a) smoothed final evaluation vs. trial number, (b) smoothed insertion time in simulation time steps vs. trial number]
Figure 4: Smoothed Final Evaluation Received And Smoothed Insertion Time (In
Simulation Time Steps) Taken On Each Of 800 Consecutive Trials On The 3D Peg
Insertion Task. The Smoothed Curve Was Obtained By Filtering The Raw Data
Using A Moving-Average Window Of 25 Consecutive Values.
Although others have studied similar tasks, in most other work on learning peg-in-hole insertion (e.g., Lee & Kim, 1988) it is assumed that the positional uncertainty
is about an order of magnitude less than the clearance. Moreover, results are
often presented using simulated peg-hole systems. Our results indicate that our
approach works well with a physical system, despite the much higher magnitudes
of noise and consequently greater degree of uncertainty inherent in dealing with
physical systems. Furthermore, the success of the direct reinforcement learning
approach to training the controller indicates that this approach can be useful for
automatically synthesizing robot control strategies that satisfy constraints encoded
in the performance evaluations.
Acknowledgements
This paper has benefited from many useful discussions with Andrew Barto and
Roderic Grupen. I would also like to thank Kamal Souccar for assisting with running
the Zebra Zero. This material is based upon work supported by the Air Force
Office of Scientific Research, Bolling AFB, under Grant AFOSR-89-0526 and by
the National Science Foundation under Grant ECS-8912623.
References
[1] H. Asada. Teaching and learning of compliance using neural nets: Representation and generation of nonlinear compliance. In Proceedings of the 1990 IEEE
International Conference on Robotics and Automation, pages 1237-1244, 1990.
[2] A. G. Barto and P. Anandan. Pattern-recognizing stochastic learning automata. IEEE Transactions on Systems, Man, and Cybernetics, 15:360-375,
1985.
[3] A. G. Barto and V. Gullapalli. Neural Networks and Adaptive Control. In
P. Rudomin, M. A. Arbib, and F. Cervantes-Perez, editors, Natural and Artificial Intelligence. Research Notes in Neural Computation, Springer-Verlag:
Washington. (in press).
[4] M. E. Caine, T. Lozano-Perez, and W. P. Seering. Assembly strategies for
chamferless parts. In Proceedings of the IEEE International Conference on
Robotics and Automation, pages 472-477, May 1989.
[5] B. R. Donald. Robot motion planning with uncertainty in the geometric models
of the robot and environment: A formal framework for error detection and
recovery. In Proceedings of the IEEE International Conference on Robotics
and Automation, pages 1588-1593, 1986.
[6] M. Erdmann. Using backprojections for fine motion planning with uncertainty.
International Journal of Robotics Research, 5(1):19-45, 1986.
[7] S. J. Gordon. A utomated assembly using feature localization. PhD thesis,
Massachusetts Institute of Technology, MIT AI Laboratory, Cambridge, MA,
1986. Technical Report 932.
[8] V. Gullapalli. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks, 3:671-692, 1990.
[9] V. Gullapalli. Reinforcement Learning and its application to control. PhD
thesis, University of Massachusetts, Amherst, MA 01003, 1992.
[10] V. Gullapalli, R. A. Grupen, and A. G. Barto. Learning reactive admittance
control. In Proceedings of the 1992 IEEE International Conference on Robotics
and Automation, pages 1475-1480, Nice, France, 1992.
[11] R. E. Gustavson. A theory for the three-dimensional mating of chamfered
cylindrical parts. Journal of Mechanisms, Transmissions, and Automated Design, December 1984.
[12] S. Lee and M. H. Kim. Learning expert systems for robot fine motion control.
In H. E. Stephanou, A. Meystal, and J. Y. S. Luh, editors, Proceedings of the
1988 IEEE International Symposium on Intelligent Control, pages 534-544,
Arlington, Virginia, USA, 1989. IEEE Computer Society Press: Washington.
[13] T. Lozano-Perez, M. T. Mason, and R. H. Taylor. Automatic synthesis of fine-motion strategies for robots. The International Journal of Robotics Research,
3(1):3-24, Spring 1984.
[14] D. E. Whitney. Quasi-static assembly of compliantly supported rigid parts.
Journal of Dynamic Systems, Measurement, and Control, 104, March 1982.
Also in Robot Motion: Planning and Control (Brady, M., et al. eds.), MIT
Press, Cambridge, MA, 1982.
| 683 |@word trial:6 cylindrical:1 version:3 instruction:1 simulation:2 sensed:18 moment:4 configuration:2 denoting:1 realistic:1 progressively:1 intelligence:1 gear:2 prespecified:1 el1:1 characterization:1 five:2 direct:9 symposium:1 grupen:2 combine:1 expected:1 behavior:5 planning:6 decreasing:1 automatically:1 actual:7 equipped:1 window:2 becomes:1 moreover:1 kind:1 substantially:1 ended:1 brady:1 control:34 unit:7 grant:2 before:1 despite:3 becoming:1 studied:2 commanded:1 testing:1 wrist:1 grasped:1 donald:2 close:1 straightforward:1 automaton:1 recovery:1 regarded:1 fx:2 controlling:1 today:1 designing:1 continues:1 inserted:3 solved:1 cycle:1 movement:1 counter:1 servo:2 environment:3 pd:1 insertion:33 complexity:1 ideally:1 dynamic:1 trained:1 upon:1 localization:1 easily:1 joint:4 indirect:3 various:1 skillful:2 effective:1 artificial:1 quite:2 whose:1 widely:1 valued:2 encoded:1 otherwise:1 final:5 associative:5 backprojections:1 net:1 interaction:2 fr:2 loop:2 fmax:1 requirement:1 transmission:1 illustrate:2 andrew:1 measured:1 received:2 strong:1 implemented:1 kgf:2 indicate:5 quantify:1 direction:1 stochastic:3 human:1 material:1 argued:1 require:1 adjusted:1 mm:15 mapping:2 major:1 consecutive:5 purpose:1 applicable:1 grouped:1 largest:2 successfully:1 mit:2 sensor:9 modified:1 varying:1 barto:6 command:6 office:1 viz:1 quence:1 consistently:1 indicates:1 kim:2 rigid:1 gustavson:3 hidden:3 quasi:1 france:1 orientation:1 fairly:1 construct:1 washington:2 kamal:1 discrepancy:1 connectionist:1 stimulus:1 gordon:3 others:1 primarily:2 inherent:1 report:1 intelligent:1 national:1 maintain:1 attempt:1 detection:1 interest:1 evaluation:12 extreme:6 operated:1 perez:5 closer:1 taylor:1 desired:1 whitney:4 imperative:1 successful:3 virginia:1 encoders:2 my:3 international:7 amherst:2 lee:2 off:3 compliant:5 together:1 continuously:1 synthesis:1 thesis:2 recorded:1 external:1 expert:1 actively:1 luh:1 automation:4 satisfy:1 try:1 closed:1 capability:1 air:1 loaded:1 characteristic:1 raw:2 trajectory:1 researcher:1 cybernetics:1 ed:1 mating:1 associated:1 static:2 massachusetts:3 popular:1 back:1 trw:1 higher:2 supervised:1 arlington:1 methodology:1 afb:1 evaluated:1 furthermore:1 hand:1 ei:2 nonlinear:4 propagation:1 indicated:1 scientific:1 usa:1 effect:1 lozano:4 hence:2 laboratory:1 cervantes:1 during:2 clearance:7 criterion:1 presenting:1 demonstrate:1 motion:11 passive:1 roderic:1 ranging:1 consideration:1 physical:3 significant:1 measurement:1 cambridge:2 zebra:4 ai:1 automatic:1 teaching:1 had:3 moving:2 robot:18 termed:1 verlag:1 success:1 seen:2 greater:1 anandan:2 determine:2 signal:1 ii:2 rv:1 assisting:1 compounded:1 technical:1 long:2 controller:15 circumstance:1 admittance:2 robotics:6 affecting:1 separately:1 fine:3 interval:1 source:1 appropriately:1 compliance:3 subject:1 recording:1 december:1 presence:5 ideal:1 iii:1 automated:1 affect:1 arbib:1 gullapalli:15 six:1 utility:2 tactile:1 action:2 adequate:1 useful:3 involve:1 amount:1 ifor:1 diameter:2 reduced:1 peg:50 canonical:1 jv:1 uncertainty:36 respond:1 layer:2 occur:1 constraint:1 spring:1 passively:1 performing:1 department:1 march:1 ate:1 increasingly:1 modification:1 taken:2 equation:1 previously:1 describing:1 kinematics:1 mechanism:2 cor:1 available:1 operation:1 distinguished:1 robustly:2 slower:1 denotes:1 running:1 include:1 assembly:4 especially:2 society:1 contact:1 crafting:1 occurs:1 posture:1 strategy:10 traditional:2 nr:1 thank:1 simulated:1 degrade:1 fy:3 srv:2 modeled:1 difficult:4 
unfortunately:2 executed:1 synthesizing:1 design:2 perform:4 observation:1 contro:2 situation:1 interacting:1 smoothed:6 bolling:1 elapsed:1 beyond:1 suggested:1 able:1 usually:1 pattern:1 reading:1 including:1 max:2 natural:1 force:18 arm:1 improve:1 technology:1 realvalued:1 axis:1 started:1 coupled:1 nice:1 geometric:3 acknowledgement:1 afosr:1 generation:2 filtering:2 lv:1 foundation:1 degree:4 editor:2 i8:1 critic:3 supported:2 offline:1 backlash:3 formal:1 institute:1 wide:2 feedback:4 depth:1 curve:6 world:4 evaluating:1 sensory:4 forward:1 reinforcement:15 adaptive:1 ec:1 transaction:1 dealing:2 sequentially:1 active:1 assumed:1 table:3 learn:5 robust:2 complex:2 noise:7 repeated:2 benefited:1 en:1 position:37 exceeding:1 learns:1 load:16 showing:1 appropri:1 sensing:1 mason:1 stephanou:1 incorporating:1 phd:2 magnitude:4 execution:1 occurring:1 hole:25 margin:1 cartesian:1 suited:1 depicted:1 vironment:1 simply:1 forming:2 positional:3 springer:1 ma:4 goal:1 quantifying:2 consequently:1 man:1 considerable:1 change:2 asada:3 acting:1 experimental:1 attempted:1 reactive:8 |
6,446 | 6,830 | Beyond normality: Learning sparse probabilistic
graphical models in the non-Gaussian setting
Rebecca E. Morrison
MIT
[email protected]
Ricardo Baptista
MIT
[email protected]
Youssef Marzouk
MIT
[email protected]
Abstract
We present an algorithm to identify sparse dependence structure in continuous
and non-Gaussian probability distributions, given a corresponding set of data. The
conditional independence structure of an arbitrary distribution can be represented
as an undirected graph (or Markov random field), but most algorithms for learning
this structure are restricted to the discrete or Gaussian cases. Our new approach
allows for more realistic and accurate descriptions of the distribution in question,
and in turn better estimates of its sparse Markov structure. Sparsity in the graph is
of interest as it can accelerate inference, improve sampling methods, and reveal
important dependencies between variables. The algorithm relies on exploiting the
connection between the sparsity of the graph and the sparsity of transport maps,
which deterministically couple one probability measure to another.
1 Undirected probabilistic graphical models
Given n samples from the joint probability distribution of some random variables X1 , . . . , Xp , a
common goal is to discover the underlying Markov random field. This field is specified by an
undirected graph G, comprising the set of vertices V = {1, . . . , p} and the set of edges E. The edge
set E encodes the conditional independence structure of the distribution, i.e., e_jk ∉ E ⟺ X_j ⊥ X_k | X_{V\{j,k}}. Finding the edges of the graph is useful for several reasons: knowledge of conditional
independence relations can accelerate inference and improve sampling methods, as well as illuminate
structure underlying the process that generated the data samples. This problem?identifying an
undirected graph given samples?is quite well studied for Gaussian or discrete distributions. In the
Gaussian setting, the inverse covariance, or precision, matrix precisely encodes the sparsity of the
graph. That is, a zero appears in the jk-th entry of the precision if and only if variables Xj and Xk
are conditionally independent given the rest. Estimation of the support of the precision matrix is often
achieved using a maximum likelihood estimate with `1 penalties. Coordinate descent (glasso) [4] and
neighborhood selection [14] algorithms can be consistent for sparse recovery with few samples, i.e.,
p > n. In the discrete setting, [12] showed that for some particular graph structure, the support of a
generalized covariance matrix encodes the conditional independence structure of the graph, while
[21] employed sparse `1 -penalized logistic regression to identify Ising Markov random fields.
Many physical processes, however, generate data that are continuous but non-Gaussian. One example
is satellite images of cloud cover formation, which may greatly impact weather conditions and
climate [25, 20]. Other examples include biological processes such as bacteria growth [5], heartbeat
behavior [19], and transport in biological tissues [9]. Normality assumptions about the data may
prevent the detection of important conditional dependencies. Some approaches have allowed for
non-Gaussianity, such as the nonparanormal approach of [11, 10], which uses copula functions to
estimate a joint non-Gaussian density while preserving conditional independence. However, this
approach is still restricted by the choice of copula function, and as far as we know, no fully general
approach is yet available. Our goal in this work is to consistently estimate graph structure when
the underlying data-generating process is non-Gaussian. To do so, we propose the algorithm SING
(Sparsity Identification in Non-Gaussian distributions). SING uses the framework of transport maps
to characterize arbitrary continuous distributions, as described in §3. Our representations of transport
maps employ polynomial expansions of degree β. Setting β = 1 (i.e., linear maps) is equivalent to
assuming that the data are well approximated by a multivariate Gaussian. With β > 1, SING acts as
a generalization of Gaussian-based algorithms by allowing for arbitrarily rich parameterizations of
the underlying data-generating distribution, without additional assumptions on its structure or class.
2 Learning conditional independence
Let X_1, . . . , X_p have a smooth and strictly positive density π on R^p. A recent result in [26] shows
that conditional independence of the random variables Xj and Xk can be determined as follows:
X_j ⊥ X_k | X_{V\{j,k}}  ⟺  ∂_jk log π(x) = 0,  ∀ x ∈ R^p,   (1)
where ∂_jk(·) denotes the jk-th mixed partial derivative. Here, we define the generalized precision
as the matrix Ω, where Ω_jk = E_π[|∂_jk log π(x)|]. Note that Ω_jk = 0 is a sufficient condition
that variables X_j and X_k be conditionally independent. Thus, finding the zeros in the matrix Ω is
equivalent to finding the graphical structure underlying the density π. Note that the zeros of the
precision matrix in a Gaussian setting encode the same information (the lack of an edge) as the
zeros in the generalized precision introduced here.
Our approach identifies the zeros of Ω and thus the edge set E in an iterative fashion via the
algorithm SING, outlined in §4. Note that computation of an entry of the generalized precision
relies on an expression for the density π. We represent π and also estimate Ω using the notion of a
transport map between probability distributions. This map is estimated from independent samples
x^(i) ∼ π, i = 1, . . . , n, as described in §3. Using a map, of the particular form described below,
offers several advantages: (1) computing the generalized precision given the map is efficient (e.g.,
the result of a convex optimization problem); (2) the transport map itself enjoys a notion of sparsity
that directly relates to the Markov structure of the data-generating distribution; (3) a coarse map may
capture these Markov properties without perfectly estimating the high-dimensional density π.
Let us first summarize our objective and proposed approach. We aim to solve the following graph
recovery problem:
Given samples {x^(i)}_{i=1}^n from some unknown distribution, find the dependence
graph of this distribution and the associated Markov properties.
Our proposed approach loosely follows these steps:
• Estimate a transport map from samples
• Given an estimate of the map, compute the generalized precision Ω
• Threshold Ω to identify a (sparse) graph
• Given a candidate graphical structure, re-estimate the map. Iterate...
The final step, re-estimating the map given a candidate graphical structure, makes use of a strong
connection between the sparsity of the graph and the sparsity of the transport map (as shown by [26]
and described in §3.2). Sparsity in the graph allows for sparsity in the map, and a sparser map allows
for improved estimates of Ω. This is the motivation behind an iterative algorithm.
3 Transport maps
The first step of SING is to estimate a transport map from samples [13]. A transport map induces
a deterministic coupling of two probability distributions [22, 15, 18, 26]. Here, we build a map
between the distribution generating the samples (i.e., X ∼ π) and a standard normal distribution
η = N(0, I_p). As described in [28, 2], given two distributions with smooth and strictly positive
densities (π, η),¹ there exists a monotone map S : R^p → R^p such that S_♯π = η and S^♯η = π, where
S_♯π(y) = π ∘ S^{-1}(y) det(∇S^{-1}(y))   (2)
S^♯η(x) = η ∘ S(x) det(∇S(x)).   (3)
¹ Regularity assumptions on π, η can be substantially relaxed, though (2) and (3) may need modification [2].
We say η is the pushforward density of π by the map S, and similarly, π is the pullback of η by S.
Many possible transport maps satisfy the measure transformation conditions above. In this work, we
restrict our attention to lower triangular monotone increasing maps. [22, 7, 2] show that, under the
conditions above, there exists a unique lower triangular map S of the form
S(x) = ( S^1(x_1),
         S^2(x_1, x_2),
         S^3(x_1, x_2, x_3),
         . . . ,
         S^p(x_1, . . . , x_p) ),
with ∂_k S^k > 0. The qualifier "lower triangular" refers to the property that each component of the
map S^k only depends on variables x_1, . . . , x_k. The space of such maps is denoted S_Δ.
As an example, consider a normal random variable: X ∼ N(0, Σ). Taking the Cholesky decomposition of the covariance K K^T = Σ, then K^{-1} is a linear operator that maps (in distribution) X to a
random variable Y ∼ N(0, I_p), and similarly, K maps Y to X. In this example, we associate the
map K^{-1} with S, since it maps the more exotic distribution to the standard normal:
S(X) = K^{-1} X  =^d  Y,        S^{-1}(Y) = K Y  =^d  X.
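A minimal NumPy check of this linear Gaussian case (an illustration, not from the paper) confirms that K^{-1} whitens samples of X:

import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])     # covariance of X
K = np.linalg.cholesky(Sigma)                  # K K^T = Sigma, lower triangular
X = rng.multivariate_normal(np.zeros(2), Sigma, size=100_000)

Y = X @ np.linalg.inv(K).T                     # S(X) = K^{-1} X, applied row-wise
print(np.cov(Y.T))                             # close to the identity: Y ~ N(0, I)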
In general, however, the map S may be nonlinear. This is exactly what allows us to represent and
capture arbitrary non-Gaussianity in the samples. The monotonicity of each component of the map
(that is, ∂_k S^k > 0) can be enforced by using the following parameterization:
S^k(x_1, . . . , x_k) = c_k(x_1, . . . , x_{k-1}) + ∫_0^{x_k} exp{h_k(x_1, . . . , x_{k-1}, t)} dt,
with functions c_k : R^{k-1} → R and h_k : R^k → R. Next, a particular form for c_k and h_k is
specified; in this work, we use a linear expansion with Hermite polynomials for c_k and Hermite
functions for h_k. An important choice is the maximum degree β of the polynomials. With higher
degree, the computational difficulty of the algorithm increases by requiring the estimation of more
coefficients in the expansion. This trade-off between higher degree (which captures more possible
nonlinearity) and computational expense is a topic of current research [1]. The space of lower
triangular maps, parameterized in this way, is denoted S_Δ^β. Computations in the transport map
framework are performed using the software TransportMaps [27].
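As a concrete, simplified instance of this parameterization (our sketch: plain polynomials and trapezoidal quadrature stand in for the Hermite expansions and the quadratures used by TransportMaps), a single monotone component can be written as:

import numpy as np

def S_k(x_prev, x_k, c_coeffs, h_coeffs, n_quad=64):
    """Map component S^k(x) = c_k(x_prev) + int_0^{x_k} exp{h_k(x_prev, t)} dt.
    Monotone in x_k because the integrand exp{h_k} is strictly positive."""
    c = np.polyval(c_coeffs, np.sum(x_prev))        # toy choice of c_k
    t = np.linspace(0.0, x_k, n_quad)               # quadrature nodes in [0, x_k]
    h = np.polyval(h_coeffs, np.sum(x_prev) + t)    # toy choice of h_k
    return c + np.trapz(np.exp(h), t)

print(S_k(np.array([0.3]), 1.2, [0.5, 0.0], [0.1, -0.2]))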
3.1 Optimization of map coefficients is an MLE problem
Let α ∈ R^{n_α} be the vector of coefficients that parameterize the functions c_k and h_k, which in turn
define a particular instantiation of the transport map S_α ∈ S_Δ^β. (We include the subscript in this
subsection to emphasize that the map depends on its particular parameterization, but later drop it for
notational efficiency.) To complete the estimation of S_α, it remains to optimize for the coefficients α.
This optimization is achieved by minimizing the Kullback-Leibler divergence between the density in
question, π, and the pullback of the standard normal η by the map S_α [27]:
α* = argmin_α D_KL(π || S_α^♯ η)   (4)
   = argmin_α E_π[log π - log S_α^♯ η]   (5)
   ≈ argmax_α (1/n) Σ_{i=1}^n log S_α^♯ η(x^(i)) = α̂.   (6)
As shown in [13, 17], for standard Gaussian η and lower triangular S, this optimization problem
is convex and separable across dimensions 1, . . . , p. Moreover, by line (6), the solution to the
optimization problem is a maximum likelihood estimate α̂. Given that the n samples are random, α̂
converges in distribution as n → ∞ to a normal random variable whose mean is the exact minimizer
α*, and whose variance is I^{-1}(α*)/n, where I(α) is the Fisher information matrix. That is:
α̂ ∼ N(α*, (1/n) I^{-1}(α*)),  as n → ∞.   (7)
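For intuition, the objective in (4)-(6) can be optimized directly for a toy linear lower triangular map; the sketch below (assumed two-dimensional Gaussian data; scipy's default quasi-Newton optimizer) maximizes the sample-average log pullback density of line (6):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=2000)

def neg_log_pullback(alpha):
    # Linear lower triangular map S(x) = A x, A = [[a, 0], [b, c]], a, c > 0.
    a, b, c = np.exp(alpha[0]), alpha[1], np.exp(alpha[2])
    Sx = np.column_stack([a * X[:, 0], b * X[:, 0] + c * X[:, 1]])
    log_eta = -0.5 * np.sum(Sx**2, axis=1) - np.log(2 * np.pi)  # standard normal
    log_det = np.log(a) + np.log(c)                             # det grad S
    return -np.mean(log_eta + log_det)

alpha_hat = minimize(neg_log_pullback, np.zeros(3)).x
print(alpha_hat)   # approximates the inverse Cholesky factor of the covariance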
[Figure 1: the same five-node graph under two orderings; panel (b) includes added fill-in edges shown dashed]
Figure 1: (a) A sparse graph with an optimal ordering; (b) Suboptimal ordering induces extra edges.
Optimizing for the map coefficients yields a representation of the density π as S_α^♯ η. Thus, it is now
possible to compute the conditional independence scores with the generalized precision:
Ω_jk = E_π[|∂_jk log π(x)|] = E_π[|∂_jk log S_α^♯ η(x)|]   (8)
     ≈ (1/n) Σ_{i=1}^n |∂_jk log S_α^♯ η(x^(i))| = Ω̂_jk.   (9)
The next step is to threshold Ω̂. First, however, we explain the connection between the two notions of
sparsity: one of the graph and the other of the map.
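Given any differentiable representation of the log-density, the estimator (9) is a sample average of absolute mixed partials. The sketch below approximates ∂_jk by central finite differences for a user-supplied log-density (an illustration only; SING differentiates the transport-map representation instead):

import numpy as np

def omega_hat(log_density, X, h=1e-3):
    """Estimate Omega_jk = E[|d2 log pi / dx_j dx_k|] over samples X, shape (n, p).
    log_density maps an (n, p) array of points to an (n,) array of values."""
    n, p = X.shape
    Om = np.zeros((p, p))
    for j in range(p):
        for k in range(j + 1, p):
            ej = np.zeros(p); ej[j] = h
            ek = np.zeros(p); ek[k] = h
            mixed = (log_density(X + ej + ek) - log_density(X + ej - ek)
                     - log_density(X - ej + ek) + log_density(X - ej - ek)) / (4 * h * h)
            Om[j, k] = Om[k, j] = np.mean(np.abs(mixed))
    return Om

# Gaussian check: off-diagonal entries should match |precision matrix| entries.
P = np.array([[1.0, 0.4, 0.0], [0.4, 1.0, 0.3], [0.0, 0.3, 1.0]])
logpi = lambda Z: -0.5 * np.einsum('ni,ij,nj->n', Z, P, Z)
X = np.random.default_rng(2).multivariate_normal(np.zeros(3), np.linalg.inv(P), 500)
print(np.round(omega_hat(logpi, X), 2))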
3.2 Sparsity and ordering of the transport map
Because the transport maps are lower triangular, they are in some sense already sparse. However,
it may be possible to prescribe more sparsity in the form of the map. [26] showed that the Markov
structure associated with the density π yields tight lower bounds on the sparsity pattern I_S, where
the latter is defined as the set of all pairs (j, k), j < k, such that the kth component of the map does
not depend on the jth variable: I_S := {(j, k) : j < k, ∂_j S^k = 0}. The variables associated with
the complement of this set are called active. Moreover, these sparsity bounds can be identified by
simple graph operations; see §5 in [26] for details. Essentially these operations amount to identifying
the intermediate graphs produced by the variable elimination algorithm, but they do not involve
actually performing variable elimination or marginalization. The process starts with node p, creates a
clique between all its neighbors, and then "removes" it. The process continues in the same way with
nodes p - 1, p - 2, and so on until node 1. The edges in the resulting (induced) graph determine the
sparsity pattern of the map I_S. In general, the induced graph will be more highly connected unless
the original graph is chordal. Since the set of added edges, or fill-in, depends on the ordering of the
nodes, it is beneficial to identify an ordering that minimizes it. For example, consider the graph in
Figure 1a. The corresponding map has a nontrivial sparsity pattern, and is thus more sparse than a
dense lower triangular map:
S(x) = ( S^1(x_1),
         S^2(x_1, x_2),
         S^3(x_1, x_2, x_3),
         S^4(x_3, x_4),
         S^5(x_4, x_5) ),        I_S = {(1, 4), (2, 4), (1, 5), (2, 5), (3, 5)}.   (10)
Now consider Figure 1b. Because of the suboptimal ordering, edges must be added to the induced
graph, shown in dashed lines. The resulting map is then less sparse than in 1a: I_S = {(1, 5), (2, 5)}.
An ordering of the variables is equivalent to a permutation, but the problem of finding an optimal
permutation is NP-hard, and so we turn to heuristics. Possible schemes include so-called min-degree
and min-fill [8]. Another that we have found to be successful in practice is reverse Cholesky, i.e., the
reverse of a good ordering for sparse Cholesky factorization [24]. We use this in the examples below.
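These graph operations are straightforward to implement; below is a sketch using networkx (our own helper, not the paper's code) that eliminates nodes from p down to 1 and records the resulting fill-in:

import itertools
import networkx as nx

def induced_graph(G, order):
    """Eliminate vertices in reverse `order`, connecting the remaining neighbors
    of each eliminated vertex; the returned induced graph bounds I_S."""
    H = G.copy()
    induced = G.copy()
    for v in reversed(order):                 # process p, p-1, ..., 1
        for a, b in itertools.combinations(list(H.neighbors(v)), 2):
            H.add_edge(a, b)                  # clique among the neighbors
            induced.add_edge(a, b)            # record fill-in
        H.remove_node(v)
    return induced

G = nx.cycle_graph([1, 2, 3, 4, 5])           # illustrative non-chordal graph
print(sorted(induced_graph(G, [1, 2, 3, 4, 5]).edges()))  # fill-in: (1,3), (1,4)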
The critical point here is that sparsity in the graph implies sparsity in the map. The space of maps
that respect this sparsity pattern is denoted S_{I_S}^β. A sparser map can be described by fewer coefficients
α, which in turn decreases their total variance when found via MLE. This improves the subsequent
estimate of Ω. Numerical results supporting this claim are shown in Figure 2 for a Gaussian grid
graph, p = 16. The plots show three levels of sparsity: "under," corresponding to a dense lower
triangular map; "exact," in which the map includes only the necessary active variables; and "over,"
corresponding to a diagonal map. In each case, the variance decreases with increasing sample size,
and the sparser the map, the lower the variance. However, non-negligible bias is incurred when the
map is over-sparsified; see Figure 2b. Ideally, the algorithm would move from the under-sparsified
level to the exact level.
[Figure 2: two log-log plots for a Gaussian grid graph, p = 16; (a) average variance in Ω̂ and (b) squared bias in Ω̂ versus number of samples (10^2 to 10^3), for sparsity levels Under, Exact, and Over]
Figure 2: (a) Variance of Ω̂_jk decreases with fewer coefficients and/or more samples; (b) Bias in Ω̂_jk
occurs with oversparsification. The bias and variance of Ω̂ are computed using the Frobenius norm.
4 Algorithm: SING
We now present the full algorithm. Note that the ending condition is controlled by a variable
DECREASING, which is set to true until the size of the recovered edge set is no longer decreasing. The
final ingredient is the thresholding step, explained in §4.1. Subscripts l in the algorithm refer to the
given quantity at that iteration.
Algorithm 1: Sparsity Identification in Non-Gaussian distributions (SING)
input:  n i.i.d. samples {x^(i)}_{i=1}^n ∼ π, maximum polynomial degree β
output: sparse edge set Ê
define: I_S_1 = {∅}, l = 1, |Ê_0| = p(p - 1)/2, DECREASING = TRUE
1   while DECREASING = TRUE do
2       Estimate transport map S_l ∈ S_{I_S_l}^β, where S_l^♯ η = π
3       Compute (Ω̂_l)_jk = (1/n) Σ_{i=1}^n |∂_jk log S_l^♯ η(x^(i))|
4       Threshold Ω̂_l
5       Compute |Ê_l| (the number of edges in the thresholded graph)
6       if |Ê_l| < |Ê_{l-1}| then
7           Find an appropriate permutation of the variables (for example, reverse Cholesky ordering)
8           Identify sparsity pattern of subsequent map I_S_{l+1}
9           l ← l + 1
10      else
11          DECREASING = FALSE
SING is not a strictly greedy algorithm, neither for the sparsity pattern of the map nor for the edge
removal of the graph. First, the process of identifying the induced graph may involve fill-in, and the
extent of this fill-in might be larger than optimal due to the ordering heuristics. Second, the estimate
of the generalized precision is noisy due to finite sample size, and this noise can add randomness to a
thresholding decision. As a result, a variable that is set as inactive may be reactivated in subsequent
iterations. However, we have found that oscillation in the set of active variables is a rare occurrence.
Thus, checking that the total number of edges is nondecreasing (as a global measure of sparsity)
works well as a practical stopping criterion.
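Abstracting away the transport-map machinery, the control flow of Algorithm 1 reduces to a short loop. In the sketch below, fit_map, omega_hat, threshold, and induced_sparsity are hypothetical callables standing in for the steps described above:

def sing(samples, p, fit_map, omega_hat, threshold, induced_sparsity):
    """One rendering of Algorithm 1's outer loop.
    fit_map(samples, sparsity)   -> estimated transport map
    omega_hat(S, samples)        -> (p, p) matrix of scores
    threshold(Omega)             -> set of retained edges {(j, k), j < k}
    induced_sparsity(edges)      -> sparsity pattern I_S for the next map"""
    sparsity = set()                     # start from a dense lower triangular map
    n_edges_prev = p * (p - 1) // 2
    while True:
        S = fit_map(samples, sparsity)
        Omega = omega_hat(S, samples)
        edges = threshold(Omega)
        if len(edges) >= n_edges_prev:   # edge count stopped decreasing: done
            return edges
        n_edges_prev = len(edges)
        sparsity = induced_sparsity(edges)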
4.1 Thresholding the generalized precision
An important component of this algorithm is a thresholding of the generalized precision. Based
on literature [3] and numerical results, we model the threshold as τ_jk = δ σ_jk, where δ is a tuning
parameter and σ_jk = [V(Ω̂_jk)]^{1/2} (where V denotes variance). Note that a threshold τ_jk is computed
at each iteration and for every off-diagonal entry of Ω. More motivation for this choice is given in
the scaling analysis of the following section. The expression (7) yields an estimate of the variances
of the map coefficients α̂, but this uncertainty must still be propagated to the entries of Ω in order
to compute σ_jk. This is possible using the delta method [16], which states that if a sequence of
one-dimensional random variables satisfies
√n (X^(n) - θ) →_d N(0, σ²),
then for a function g(θ),
√n (g(X^(n)) - g(θ)) →_d N(0, σ² |g′(θ)|²).
The MLE result also states that the coefficients are normally distributed as n → ∞. Thus, generalizing
this method to vector-valued random variables gives an estimate for the variance in the entries of Ω,
as a function of α, evaluated at the true minimizer α*:
σ_jk² ≈ (∇_α Ω_jk)^T (1/n) I^{-1}(α) (∇_α Ω_jk) |_{α = α*}.   (11)
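In code, (11) is one quadratic form per entry; the sketch below assumes the Fisher information and the gradients ∇_α Ω_jk are already available (both are problem-specific) and returns per-entry thresholds τ_jk = δ σ_jk:

import numpy as np

def thresholds(grad_omega, fisher_info, n, delta):
    """grad_omega: dict (j, k) -> gradient of Omega_jk w.r.t. the coefficients,
    fisher_info: Fisher information I(alpha), n: sample size, delta: tuning."""
    I_inv = np.linalg.inv(fisher_info)
    tau = {}
    for (j, k), g in grad_omega.items():
        sigma2 = g @ I_inv @ g / n       # equation (11)
        tau[(j, k)] = delta * np.sqrt(sigma2)
    return tau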
5 Scaling analysis
In this section, we derive an estimate for the number of samples needed to recover the exact graph
with some given probability. We consider a one-step version of the algorithm, or in other words: what
is the probability that the correct graph will be returned after a single step of SING? We also assume
a particular instantiation of the transport map, and that ω, the minimum non-zero edge weight in the
true generalized precision, is given. That is, ω = min_{j≠k, Ω_jk≠0} (Ω_jk).
There are two possibilities for each pair (j, k), j < k: the edge e_jk does exist in the true edge set E
(case 1), or it does not (case 2). In case 1, the estimated value should be greater than its variance, up
to some level of confidence, reflected in the choice of δ: Ω_jk > δ σ_jk. In the worst case, Ω_jk = ω,
so it must be that ω > δ σ_jk. On the other hand, in case 2, in which the edge does not exist, then
similarly ω - δ σ_jk > 0.
If σ_jk < ω/δ, then by equation (11), we have
(1/n) (∇_α Ω_jk)^T I^{-1}(α) (∇_α Ω_jk) < (ω/δ)²   (12)
and so it must be that the number of samples
n > (∇_α Ω_jk)^T I^{-1}(α) (∇_α Ω_jk) (δ/ω)².   (13)
Let us define the RHS above as n*_jk and set n* = max_{j≠k} n*_jk.
Recall that the estimate in line (9) contains the absolute value of a normally distributed quantity,
known as a folded normal distribution. In case 1, the mean is bounded away from zero, and with
small enough variance, the folded part of this distribution is negligible. In case 2, the mean (before
taking the absolute value) is zero, and so this estimate takes the form of a half-normal distribution.
Let us now relate the level of confidence as reflected in δ to the probability z that an edge is correctly
estimated. We define a function for the standard normal (in case 1) Φ_1 : R^+ → (0, 1) such that
Φ_1(δ_1) = z_1 and its inverse δ_1 = Φ_1^{-1}(z_1), and similarly for the half-normal with Φ_2, δ_2, and z_2.
Consider the event B_jk as the event that edge e_jk is estimated incorrectly:
B_jk = { (e_jk ∈ E ∧ ê_jk ∉ Ê)  ∨  (e_jk ∉ E ∧ ê_jk ∈ Ê) }.
In case 1,
δ_1 σ_jk < ω  ⟹  P(B_jk) < (1/2)(1 - z_1),
where the factor of 1/2 appears because this event only occurs when the estimate is below ω (and not
when the estimate is high). In case 2, we have
δ_2 σ_jk < ω  ⟹  P(B_jk) < (1 - z_2).
To unify these two cases, let us define z where 1 - z = (1 - z_1)/2, and set z = z_2. Finally, we have
P(B_jk) < (1 - z),  j < k.
Now we bound the probability that at least one edge is incorrect with a union bound:
P(∪_{j<k} B_jk) ≤ Σ_{j<k} P(B_jk)   (14)
              < (1/2) p(p - 1)(1 - z).   (15)
as p increases, and decreases as z approaches 1. Next, we bound this probability of recovering an
incorrect graph by m. Then p(p ? 1)(1 ? z) < 2m which yields z > 1 ? 2m/ (p(p ? 1)). Let
2m
2m
?1
,
?
1
?
.
(16)
? ? = max [?1 , ?2 ] = max ??1
1
?
1
2
p(p ? 1)
p(p ? 1)
=
Therefore, to recover the correct graph with probability m we need at least n? samples, where
(
? 2 )
?
T
n? = max (?? ?jk ) I ?1 (?) (?? ?jk )
.
j6=k
?
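As a worked example under assumed values p = 10 and m = 0.05, and reading Φ_1 and Φ_2 in (16) as the standard normal and half-normal quantile functions, δ* can be computed with scipy:

from scipy.stats import halfnorm, norm

p, m = 10, 0.05                          # assumed graph size and failure bound
q = 1 - 2 * m / (p * (p - 1))            # argument of the quantiles in (16)
delta_star = max(norm.ppf(q), halfnorm.ppf(q))
print(q, delta_star)                     # roughly 0.9989 and 3.3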
6 Examples
6.1 Modified Rademacher
Consider r pairs of random variables (X, Y), where:
X ∼ N(0, 1)   (17)
Y = W X, with W ∼ N(0, 1).   (18)
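Sampling from this model is immediate; the sketch below draws r = 5 independent (X, Y) pairs and checks that a pair is (nearly) uncorrelated despite the dependence:

import numpy as np

rng = np.random.default_rng(3)
r, n = 5, 2000
X = rng.standard_normal((n, r))
W = rng.standard_normal((n, r))
Y = W * X                                 # Y depends on X, yet corr(X, Y) ~ 0

samples = np.column_stack([X, Y])         # 2r columns: X_1..X_r, Y_1..Y_r
print(round(np.corrcoef(X[:, 0], Y[:, 0])[0, 1], 3))   # near zero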
(A common example illustrating that two random variables can be uncorrelated but not independent
uses draws for W from a Rademacher distribution, which are -1 and 1 with equal probability.) When
r = 5, the corresponding graphical model and support of the generalized precision are shown in
Figure 3. The same figure also shows the one- and two-dimensional marginal distributions for one
pair (X, Y ). Each 1-dimensional marginal is symmetric and unimodal, but the two-dimensional
marginal is quite non-Gaussian.
Figures 4a-4c show the progression of the identified graph over the iterations of the algorithm, with
n = 2000, δ = 2, and maximum degree β = 2. The variables are initially permuted to demonstrate
that the algorithm is able to find a good ordering. After the first iteration, one extra edge remains.
After the second, the erroneous edge is removed and the graph is correct. After the third, the sparsity
of the graph has not changed and the recovered graph is returned as is. Importantly, an assumption of
normality on the data returns the incorrect graph, displayed in Figure 4d. (This assumption can be
enforced by using a linear transport map, or β = 1.) In fact, not only is the graph incorrect, the use of
a linear map fails to detect any edges at all and deems the ten variables to be independent.
6.2 Stochastic volatility
As a second example, we consider data generated from a stochastic volatility model of a financial asset
[23, 6]. The log-volatility of the asset is modeled as an autoregressive process at times t = 1, . . . , T .
In particular, the state at time t + 1 is given as
Z_{t+1} = μ + φ(Z_t - μ) + ε_t,    ε_t ∼ N(0, 1)   (19)
[Figure 3: three panels (a), (b), and (c), as described in the caption below]
Figure 3: (a) The undirected graphical model; (b) One- and two-dimensional marginal distributions
for one pair (X, Y ); (c) Adjacency matrix of true graph (dark blue corresponds to an edge, off-white
to no edge).
[Figure 4: four 10 × 10 adjacency matrices, panels (a)-(d), as described in the caption below]
Figure 4: (a) Adjacency matrix of original graph under random variable permutation; (b) Iteration 1;
(c) Iterations 2 and 3 are identical: correct graph recovered via SING with β = 2; (d) Recovered
graph, using SING with β = 1.
where
Z_0 | μ, φ ∼ N(μ, 1/(1 - φ²)),    μ ∼ N(0, 1)   (20)
φ = 2 e^{φ*}/(1 + e^{φ*}) - 1,    φ* ∼ N(3, 1).   (21)
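For reference, the generative side of (19)-(21) is a few lines of Python; the sketch below simulates the prior only (posterior sampling given noisy observations, as used in the experiment, is not shown):

import numpy as np

rng = np.random.default_rng(4)
T = 6
mu = rng.standard_normal()                        # mu ~ N(0, 1)
phi_star = rng.normal(3.0, 1.0)                   # phi* ~ N(3, 1)
phi = 2.0 * np.exp(phi_star) / (1.0 + np.exp(phi_star)) - 1.0   # eq. (21)

Z = np.empty(T)
Z[0] = rng.normal(mu, np.sqrt(1.0 / (1.0 - phi**2)))            # eq. (20)
for t in range(T - 1):
    Z[t + 1] = mu + phi * (Z[t] - mu) + rng.standard_normal()   # eq. (19)
print(mu, phi, Z)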
The corresponding graph is depicted in Figure 5. With T = 6, samples were generated from the
posterior distribution of the state Z_{1:6} and hyperparameters μ and φ, given noisy measurements of the
state. Using a relatively large number of samples n = 15000, δ = 1.5, and β = 2, the correct graph
is recovered, shown in Figure 6a. With the same amount of data, a linear map returns the incorrect
graph, having both missing and spurious additional edges. The large number of samples is required
[Figure 5: (a) graphical model linking μ, φ, and Z_1, . . . , Z_T; (b) adjacency matrix over μ, φ, Z_1, . . . , Z_6]
Figure 5: (a) The graph of the stochastic volatility model; (b) Adjacency matrix of true graph.
[Figure 6: three adjacency matrices over μ, φ, Z_1, . . . , Z_6, panels (a)-(c), as described in the caption below]
Figure 6: Recovered graphs using: (a) SING, β = 2, n = 15000; (b) SING, β = 1; (c) GLASSO.
because the edges between hyperparameters and state variables are quite weak. Magnitudes of the
entries of the generalized precision (scaled to have maximum value 1) are displayed in Figure 7a.
The stronger edges may be recovered with a much smaller number of samples (n = 2000), however;
see Figure 7b. This example illustrates the interplay between the minimum edge weight ω and the
number of samples needed, as seen in the previous section. In some cases, it may be more reasonable
to expect that, given a fixed number of samples, SING could recover edges with edge weight above
some ω_min, but would not reliably discover edges below that cutoff. Strong edges could also be
discovered using fewer samples and a modified SING algorithm with ℓ1 penalties (a modification to
the algorithm currently under development).
For comparison, Figure 6c shows the graph produced by assuming that the data are Gaussian and
using the GLASSO algorithm [4]. Results were generated for 40 different values of the tuning
parameter λ ∈ (10^{-6}, 1). The result shown here was chosen such that the sparsity level is locally
constant with respect to λ, specifically at λ = 0.15. Here we see that using a Gaussian assumption
with non-Gaussian data overestimates edges among state variables and underestimates edges between
state variables and the hyperparameters.
[Figure 7: two matrices over μ, φ, Z_1, . . . , Z_6; (a) entries shown on a 0.0-1.0 color scale; (b) recovered strong edges]
Figure 7: (a) The scaled generalized precision matrix Ω̂; (b) Strong edges recovered via SING, n = 2000.
7 Discussion
The scaling analysis presented here depends on a particular representation of the transport map. An
interesting open question is: What is the information-theoretic (representation-independent) lower
bound on the number of samples needed to identify edges in the non-Gaussian setting? This question
relates to the notion of an information gap: any undirected graph satisfies the Markov properties of an
infinite number of distributions, and thus identification of the graph should require less information
than that of the distribution. Formalizing these notions is an important topic of future work.
Acknowledgments
This work has been supported in part by the AFOSR MURI on "Managing multiple information
sources of multi-physics systems," program officer Jean-Luc Cambier, award FA9550-15-1-0038. We
would also like to thank Daniele Bigoni for generous help with code implementation and execution.
References
[1] D. Bigoni, A. Spantini, and Y. Marzouk. On the computation of monotone transports. In preparation.
[2] V. I. Bogachev, A. V. Kolesnikov, and K. V. Medvedev. Triangular transformations of measures. Sbornik: Mathematics, 196(3):309, 2005.
[3] T. Cai and W. Liu. Adaptive thresholding for sparse covariance matrix estimation. Journal of the American Statistical Association, 106(494):672-684, 2011.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[5] S. K. Ghosh, A. G. Cherstvy, D. S. Grebenkov, and R. Metzler. Anomalous, non-Gaussian tracer diffusion in crowded two-dimensional environments. New Journal of Physics, 18(1):013027, 2016.
[6] S. Kim, N. Shephard, and S. Chib. Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies, 65(3):361-393, 1998.
[7] H. Knothe. Contributions to the theory of convex bodies. The Michigan Mathematical Journal, 1957(1028990175), 1957.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[9] C. Liu, R. Bammer, B. Acar, and M. E. Moseley. Characterizing non-Gaussian diffusion by using generalized diffusion tensors. Magnetic Resonance in Medicine, 51(5):924-937, 2004.
[10] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, 40(4):2293-2326, 2012.
[11] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2009.
[12] P.-L. Loh and M. J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. In NIPS, pages 2096-2104, 2012.
[13] Y. Marzouk, T. Moselhy, M. Parno, and A. Spantini. Sampling via measure transport: An introduction. In R. Ghanem, D. Higdon, and H. Owhadi, editors, Handbook of Uncertainty Quantification. Springer, 2016.
[14] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, pages 1436-1462, 2006.
[15] T. A. Moselhy and Y. M. Marzouk. Bayesian inference with optimal maps. Journal of Computational Physics, 231(23):7815-7850, 2012.
[16] G. W. Oehlert. A note on the delta method. The American Statistician, 46(1):27-29, 1992.
[17] M. Parno and Y. Marzouk. Transport map accelerated Markov chain Monte Carlo. arXiv preprint arXiv:1412.5492, 2014.
[18] M. Parno, T. Moselhy, and Y. M. Marzouk. A multiscale strategy for Bayesian inference using transport maps. SIAM/ASA Journal on Uncertainty Quantification, 4(1):1160-1190, 2016.
[19] C.-K. Peng, J. Mietus, J. Hausdorff, S. Havlin, H. E. Stanley, and A. L. Goldberger. Long-range anticorrelations and non-Gaussian behavior of the heartbeat. Physical Review Letters, 70(9):1343, 1993.
[20] M. Perron and P. Sura. Climatology of non-Gaussian atmospheric statistics. Journal of Climate, 26(3):1063-1083, 2013.
[21] P. Ravikumar, M. J. Wainwright, and J. D. Lafferty. High-dimensional Ising model selection using l1-regularized logistic regression. The Annals of Statistics, 38(3):1287-1319, 2010.
[22] M. Rosenblatt. Remarks on a multivariate transformation. The Annals of Mathematical Statistics, 23(3):470-472, 1952.
[23] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. CRC Press, 2005.
[24] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[25] A. Sengupta, N. Cressie, B. H. Kahn, and R. Frey. Predictive inference for big, spatial, non-Gaussian data: MODIS cloud data and its change-of-support. Australian & New Zealand Journal of Statistics, 58(1):15-45, 2016.
[26] A. Spantini, D. Bigoni, and Y. Marzouk. Inference via low-dimensional couplings. arXiv preprint arXiv:1703.06131, 2017.
[27] T. M. Team. TransportMaps v1.0. http://transportmaps.mit.edu.
[28] C. Villani. Optimal Transport: Old and New, volume 338. Springer, 2008.
| 6830 |@word illustrating:1 version:1 polynomial:4 norm:1 stronger:1 villani:1 open:1 covariance:6 decomposition:1 deems:1 liu:4 contains:1 score:1 nonparanormal:2 current:1 recovered:8 z2:16 chordal:1 si:2 yet:1 goldberger:1 must:4 realistic:1 subsequent:3 numerical:2 remove:1 drop:1 plot:1 acar:1 greedy:1 fewer:3 half:2 parameterization:2 xk:10 fa9550:1 coarse:1 parameterizations:1 node:4 hermite:2 mathematical:2 incorrect:6 yuan:1 peng:1 behavior:2 nor:1 multi:1 decreasing:5 increasing:2 discover:2 underlying:5 estimating:2 exotic:1 moreover:2 bounded:1 formalizing:1 what:3 biostatistics:1 argmin:2 substantially:1 minimizes:1 finding:4 transformation:3 ghosh:1 every:1 act:1 growth:1 exactly:1 scaled:2 normally:2 overestimate:1 positive:2 negligible:2 before:1 frey:1 xv:2 subscript:2 might:1 studied:1 higdon:1 meinshausen:1 factorization:1 range:1 bjk:7 unique:1 practical:1 acknowledgment:1 practice:1 union:1 x3:2 weather:1 word:1 confidence:2 refers:1 selection:3 operator:1 minj6:1 anticorrelations:1 optimize:1 equivalent:3 map:70 deterministic:1 missing:1 attention:1 convex:3 unify:1 zealand:1 identifying:3 recovery:2 wasserman:2 importantly:1 fill:4 financial:1 notion:5 coordinate:1 annals:4 exact:6 us:3 prescribe:1 cressie:1 associate:1 approximated:1 jk:45 continues:1 metzler:1 ising:2 muri:1 spantini:3 cloud:2 preprint:2 capture:3 parameterize:1 worst:1 connected:1 ordering:11 trade:1 decrease:4 removed:1 monograph:1 environment:1 ideally:1 depend:1 tight:1 asa:1 predictive:1 creates:1 heartbeat:2 efficiency:1 accelerate:2 joint:2 represented:1 monte:1 youssef:1 formation:1 neighborhood:1 quite:3 whose:2 heuristic:2 solve:1 larger:1 say:1 pullback:2 valued:1 jean:1 triangular:9 statistic:7 nondecreasing:1 itself:1 noisy:2 final:2 ip:2 interplay:1 advantage:1 sequence:1 cai:1 propose:1 description:1 frobenius:1 ky:1 exploiting:1 regularity:1 satellite:1 rademacher:2 generating:4 converges:1 volatility:5 coupling:2 derive:1 help:1 shephard:1 strong:3 recovering:1 implies:1 australian:1 correct:5 stochastic:4 elimination:2 adjacency:3 crc:2 require:1 generalization:1 biological:2 strictly:3 hall:1 normal:9 exp:1 claim:1 generous:1 estimation:7 currently:1 uhlmann:1 mit:8 gaussian:27 aim:1 modified:2 ck:5 pn:1 baptista:1 encode:1 notational:1 consistently:1 likelihood:3 greatly:1 hk:5 kim:1 sense:1 detect:1 inference:7 el:2 stopping:1 havlin:1 initially:1 spurious:1 relation:1 koller:1 kahn:1 comprising:1 among:1 denoted:3 development:1 resonance:1 sengupta:1 spatial:1 copula:3 marginal:4 field:5 equal:1 having:1 beach:1 sampling:3 chapman:1 x4:2 identical:1 future:1 np:1 few:1 employ:1 chib:1 divergence:1 argmax:1 statistician:1 friedman:2 detection:1 interest:1 highly:1 possibility:1 behind:1 held:1 chain:1 accurate:1 edge:40 partial:1 bacteria:1 necessary:1 unless:1 loosely:1 old:1 re:2 cover:1 vertex:1 entry:6 rare:1 qualifier:1 successful:1 characterize:1 dependency:2 st:1 density:10 siam:2 probabilistic:3 off:3 physic:3 squared:1 american:2 derivative:1 ricardo:1 return:2 gaussianity:2 coefficient:9 includes:1 crowded:1 satisfy:1 depends:4 performed:1 later:1 start:1 recover:3 contribution:1 ni:2 variance:12 yield:4 identify:6 weak:1 identification:3 bayesian:2 cambier:1 produced:2 carlo:1 asset:2 j6:1 tissue:1 randomness:1 explain:1 underestimate:1 associated:3 couple:1 propagated:1 recall:1 knowledge:1 subsection:1 improves:1 stanley:1 actually:1 appears:2 higher:2 dt:1 reflected:2 improved:1 marzouk:7 evaluated:1 though:1 arch:1 until:2 hand:1 transport:25 nonlinear:1 
multiscale:1 lack:1 logistic:2 reveal:1 usa:1 requiring:1 true:8 hausdorff:1 symmetric:1 leibler:1 climate:2 conditionally:2 white:1 x5:1 daniele:1 criterion:1 generalized:16 complete:1 demonstrate:1 theoretic:1 climatology:1 l1:1 image:1 common:2 permuted:1 physical:2 volume:1 association:1 s5:1 refer:1 measurement:1 tuning:2 outlined:1 grid:3 similarly:4 z4:13 mathematics:1 nonlinearity:1 maxj6:1 han:1 longer:1 add:1 multivariate:2 posterior:1 showed:2 recent:1 optimizing:1 reverse:3 arbitrarily:1 kolesnikov:1 preserving:1 minimum:2 additional:2 relaxed:1 greater:1 seen:1 employed:1 managing:1 determine:1 morrison:1 dashed:1 relates:2 full:1 unimodal:1 multiple:1 smooth:2 reactivated:1 offer:1 long:2 ravikumar:1 mle:3 award:1 dkl:1 controlled:1 impact:1 z5:12 anomalous:1 regression:2 essentially:1 arxiv:4 iteration:7 represent:2 achieved:2 semiparametric:2 else:1 source:1 extra:2 rest:1 saad:1 induced:4 undirected:7 lafferty:3 intermediate:1 enough:1 iterate:1 independence:8 xj:5 marginalization:1 hastie:1 perfectly:1 restrict:1 suboptimal:2 identified:2 lasso:2 economic:1 det:2 inactive:1 expression:2 pushforward:1 penalty:2 loh:1 returned:2 remark:1 useful:1 involve:2 amount:2 dark:1 ten:1 locally:1 induces:2 generate:1 http:1 sl:2 exist:2 estimated:4 delta:2 correctly:1 tibshirani:1 blue:1 rosenblatt:1 discrete:4 hyperparameter:1 officer:1 threshold:5 prevent:1 neither:1 cutoff:1 thresholded:1 diffusion:3 v1:1 graph:56 monotone:3 enforced:2 inverse:4 parameterized:1 uncertainty:3 letter:1 reasonable:1 oscillation:1 draw:1 decision:1 scaling:3 bogachev:1 bound:6 nontrivial:1 precisely:1 x2:3 software:1 encodes:3 min:3 performing:1 separable:1 relatively:1 across:1 beneficial:1 smaller:1 modification:2 explained:1 restricted:2 equation:1 remains:2 turn:4 needed:3 know:1 available:1 operation:2 progression:1 away:1 appropriate:1 magnetic:1 rp:4 original:2 denotes:2 include:3 graphical:11 medicine:1 build:1 tensor:1 objective:1 move:1 question:4 already:1 added:2 occurs:2 quantity:2 strategy:1 dependence:2 diagonal:2 illuminate:1 kth:1 thank:1 topic:2 extent:1 reason:1 assuming:2 code:1 modeled:1 kk:1 z3:13 minimizing:1 relate:1 expense:1 implementation:1 reliably:1 zt:3 unknown:1 allowing:1 markov:11 sing:16 finite:1 descent:1 ejk:7 displayed:2 supporting:1 sparsified:2 incorrectly:1 team:1 rn:1 discovered:1 arbitrary:3 isl:1 atmospheric:1 rebecca:1 introduced:1 complement:1 pair:5 required:1 specified:2 perron:1 connection:3 z1:18 nip:2 beyond:1 able:1 below:4 pattern:6 sparsity:29 summarize:1 program:1 max:3 rsb:1 sbornik:1 wainwright:2 critical:1 event:3 difficulty:1 quantification:2 regularized:1 normality:3 scheme:1 improve:2 identifies:1 tracer:1 occurence:1 review:2 literature:1 removal:1 checking:1 afosr:1 glasso:3 fully:1 permutation:4 expect:1 mixed:1 interesting:1 ingredient:1 ghanem:1 incurred:1 degree:7 sufficient:1 xp:3 consistent:1 thresholding:5 principle:1 editor:1 uncorrelated:1 penalized:1 changed:1 supported:1 jth:1 enjoys:1 bias:4 neighbor:1 taking:2 characterizing:1 absolute:2 sparse:15 distributed:2 dimension:1 ending:1 rich:1 autoregressive:1 adaptive:1 far:1 moselhy:3 emphasize:1 kullback:1 monotonicity:1 clique:1 global:1 active:3 instantiation:2 handbook:1 parno:3 continuous:3 iterative:3 z6:12 ca:1 expansion:3 rue:1 dense:2 rh:1 motivation:2 noise:1 hyperparameters:2 big:1 allowed:1 x1:12 body:1 fashion:1 mietus:1 precision:17 fails:1 deterministically:1 candidate:2 third:1 z0:1 erroneous:1 exists:2 false:1 magnitude:1 execution:1 illustrates:1 sparser:3 gap:1 
knothe:1 generalizing:1 depicted:1 michigan:1 springer:2 corresponds:1 minimizer:2 satisfies:2 relies:2 conditional:9 goal:2 luc:1 fisher:1 hard:1 is1:1 change:1 determined:1 folded:2 specifically:1 infinite:1 called:2 total:2 moseley:1 support:4 cholesky:4 latter:1 accelerated:1 preparation:1 |
An Inner-loop Free Solution to Inverse Problems
using Deep Neural Networks
Kai Fan*
Duke University
[email protected]
Lawrence Carin
Duke University
[email protected]
Qi Wei*
Duke University
[email protected]
Katherine Heller
Duke University
[email protected]
Abstract
We propose a new method that uses deep learning techniques to accelerate the
popular alternating direction method of multipliers (ADMM) solution for inverse
problems. The ADMM updates consist of a proximity operator, a least squares
regression that includes a big matrix inversion, and an explicit solution for updating
the dual variables. Typically, inner loops are required to solve the first two subminimization problems due to the intractability of the prior and the matrix inversion.
To avoid such drawbacks or limitations, we propose an inner-loop free update rule
with two pre-trained deep convolutional architectures. More specifically, we learn
a conditional denoising auto-encoder which imposes an implicit data-dependent
prior/regularization on ground-truth in the first sub-minimization problem. This
design follows an empirical Bayesian strategy, leading to so-called amortized
inference. For matrix inversion in the second sub-problem, we learn a convolutional
neural network to approximate the matrix inversion, i.e., the inverse mapping is
learned by feeding the input through the learned forward network. Note that
training this neural network does not require ground-truth or measurements, i.e.,
data-independent. Extensive experiments on both synthetic data and real datasets
demonstrate the efficiency and accuracy of the proposed method compared with
the conventional ADMM solution using inner loops for solving inverse problems.
1 Introduction
Most of the inverse problems are formulated directly in the setting of an optimization problem related
to a forward model [25]. The forward model maps unknown signals, i.e., the ground-truth, to
acquired information about them, which we call data or measurements. This mapping, or forward
problem, generally depends on a physical theory that links the ground-truth to the measurements.
Solving inverse problems involves learning the inverse mapping from the measurements to the groundtruth. Specifically, it recovers a signal from a small number of degraded or noisy measurements. This
is usually ill-posed [26, 25]. Recently, deep learning techniques have emerged as excellent models
and gained great popularity for their widespread success in allowing for efficient inference techniques
on applications including pattern analysis (unsupervised), classification (supervised), computer vision,
image processing, etc. [6]. Exploiting deep neural networks to help solve inverse problems has been
explored recently [24, 1] and deep learning based methods have achieved state-of-the-art performance
in many challenging inverse problems like super-resolution [3, 24], image reconstruction [20],
* The authors contributed equally to this work.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
automatic colorization [13]. More specifically, massive datasets currently enable learning end-to-end
mappings from the measurement domain to the target image/signal/data domain to help deal with these
challenging problems instead of solving the inverse problem by inference. This mapping function
from degraded data point to ground-truth has recently been characterized by using sophisticated
networks, e.g., deep neural networks. A strong motivation to use neural networks stems from the
universal approximation theorem [5], which states that a feed-forward network with a single hidden
layer containing a finite number of neurons can approximate any continuous function on compact
subsets of $\mathbb{R}^n$, under mild assumptions on the activation function.
More specifically, in recent work [3, 24, 13, 20], an end-to-end mapping from measurements y to
ground-truth x was learned from the training data and then applied to the testing data. Thus, the
complicated inference scheme needed in the conventional inverse problem solver was replaced by
feeding a new measurement through the pre-trained network, which is much more efficient. To
improve the scope of deep neural network models, more recently, in [4], a splitting strategy was
proposed to decompose an inverse problem into two optimization problems, where one sub-problem,
related to regularization, can be solved efficiently using trained deep neural networks, leading to
an alternating direction method of multipliers (ADMM) framework [2, 17]. This method involves
training a deep convolutional auto-encoder network for low-level image modeling, which explicitly
imposes regularization that spans the subspace that the ground-truth images live in. For the subproblem that requires inverting a big matrix, a conventional gradient descent algorithm was used,
leading to an alternating update, iterating between feed-forward propagation through a network and
iterative gradient descent. Thus, an inner loop for gradient descent is still necessary in this framework.
A similar approach that learns to approximate ISTA with a neural network is illustrated in [11].
In this work, we propose an inner-loop free framework, in the sense that no iterative algorithm
is required to solve sub-problems, using a splitting strategy for inverse problems. The alternating
updates for the two sub-problems were derived by feeding through two pre-trained deep neural
networks, i.e., one using an amortized inference based denoising convolutional auto-encoder network
for the proximity operation and one using structured convolutional neural networks for the huge
matrix inversion related to the forward model. Thus, the computational complexity of each iteration
in ADMM is linear with respect to the dimensionality of the signals. The network for the proximity
operation imposes an implicit prior learned from the training data, including the measurements as well
as the ground-truth, leading to amortized inference. The network for matrix inversion is independent
from the training data and can be trained from noise, i.e., a random noise image and its output from
the forward model. To make training the networks for the proximity operation easier, three tricks have
been employed: the first one is to use a pixel shuffling technique to equalize the dimensionality of the
measurements and ground-truth; the second one is to optionally add an adversarial loss borrowed
from the GAN (Generative Adversarial Nets) framework [10] for sharp image generation; the last one
is to introduce a perceptual measurement loss derived from pre-trained networks, such as AlexNet
[12] or VGG-16 Model [23]. Arguably, the speed of the proposed algorithm, which we term Inf-ADMM-ADNN (Inner-loop free ADMM with Auxiliary Deep Neural Network), comes from the fact
that it uses two auxiliary pre-trained networks to accelerate the updates of ADMM.
Contribution The main contribution of this paper is comprised of i) learning an implicit
prior/regularizer using a denoising auto-encoder neural network, based on amortized inference;
ii) learning the inverse of a big matrix using structured convolutional neural networks, without using
training data; iii) each of the above networks can be exploited to accelerate the existing ADMM
solver for inverse problems.
2 Linear Inverse Problem
Notation: trainable networks are denoted by calligraphic font, e.g., $\mathcal{A}$, and fixed networks by italic font, e.g., $A$. As mentioned in the last section, the low dimensional measurement is denoted as $y \in \mathbb{R}^m$, which is reduced from the high dimensional ground truth $x \in \mathbb{R}^n$ by a linear operator $A$ such that $y = Ax$. Note that usually $n \geq m$, which makes the number of parameters to estimate no smaller than the number of data points in hand. This imposes an ill-posed problem for finding the solution x given a new observation y, since A is an underdetermined measurement matrix. For example, in a super-resolution set-up, the matrix A might not be invertible, such as the strided Gaussian convolution in [21, 24]. To overcome this difficulty, several computational strategies, including Markov chain Monte Carlo (MCMC) and tailored variable splitting under the ADMM framework, have been proposed and applied to different
kinds of priors, e.g., the empirical Gaussian prior [29, 32], the Total Variation prior [22, 30, 31], etc.
In this paper, we focus on the popular ADMM framework due to its low computational complexity
and recent success in solving large scale optimization problems. More specifically, the optimization
problem is formulated as
$$\hat{x} = \arg\min_{x,z} \|y - Az\|^2 + \lambda R(x), \quad \text{s.t.}\ z = x \tag{1}$$
where the introduced auxiliary variable z is constrained to be equal to x, and R(x) captures the
structure promoted by the prior/regularization. If we design the regularization in an empirical
Bayesian way, by imposing an implicit data dependent prior on x, i.e., R(x; y) for amortized
inference [24], the augmented Lagrangian for (1) is
$$\mathcal{L}(x, z, u) = \|y - Az\|^2 + \lambda R(x; y) + \langle u, x - z\rangle + \beta\|x - z\|^2 \tag{2}$$
where u is the Lagrange multiplier, and $\beta > 0$ is the penalty parameter. The usual augmented
Lagrange multiplier method is to minimize L w.r.t. x and z simultaneously. This is difficult and does
not exploit the fact that the objective function is separable. To remedy this issue, ADMM decomposes
the minimization into two subproblems that are minimizations w.r.t. x and z, respectively. More
specifically, the iterations are as follows:
$$x^{k+1} = \arg\min_x\ \beta\|x - z^k + u^k/2\beta\|^2 + \lambda R(x; y) \tag{3}$$
$$z^{k+1} = \arg\min_z\ \|y - Az\|^2 + \beta\|x^{k+1} - z + u^k/2\beta\|^2 \tag{4}$$
$$u^{k+1} = u^k + 2\beta(x^{k+1} - z^{k+1}). \tag{5}$$
If the prior R is appropriately chosen, such as $\|x\|_1$, a closed-form solution for (3), i.e., a soft thresholding solution, is naturally desirable. However, for some more complicated regularizations, e.g., a patch based prior [8], solving (3) is nontrivial, and may require iterative methods. To solve (4), a matrix inversion is necessary, for which conjugate gradient descent (CG) is usually applied to update z [4]. Thus, solving (3) and (4) is in general cumbersome. Inner loops are required to solve these two sub-minimization problems due to the intractability of the prior and the inversion, resulting in large computational complexity. To avoid such drawbacks or limitations, we propose an inner-loop free update rule with two pretrained deep convolutional architectures.
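To make the shape of iterations (3)-(5) concrete, the following minimal NumPy sketch (ours, not the authors' code) separates the two sub-solvers that a conventional implementation realizes with inner loops; `prox` and `solve_z` are placeholder callables that the remainder of the paper replaces with pre-trained networks.

```python
import numpy as np

def admm(y, A, prox, solve_z, beta=0.01, n_iters=30):
    """Generic ADMM iterations (3)-(5) for min ||y - Az||^2 + lam*R(x), s.t. z = x.

    prox(v)      -- solves step (3): argmin_x beta*||x - v||^2 + lam*R(x; y)
    solve_z(rhs) -- solves step (4): applies (A^T A + beta*I)^{-1} to rhs
    """
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iters):
        x = prox(z - u / (2 * beta))             # eq. (3)
        z = solve_z(A.T @ y + beta * x + u / 2)  # eq. (4), cf. eq. (7)
        u = u + 2 * beta * (x - z)               # eq. (5)
    return x
```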
3 Inner-loop free ADMM
3.1 Amortized inference for x using a conditional proximity operator
Solving sub-problem (3) is equivalent to finding the solution of the proximity operator
$$P_R(v; y) = \arg\min_x \tfrac{1}{2}\|x - v\|^2 + R(x; y),$$
where we incorporate the constant $\frac{\lambda}{2\beta}$ into $R$ without loss of generality. If we impose the first order necessary conditions [18], we have
$$x = P_R(v; y) \;\Leftrightarrow\; 0 \in \partial R(\cdot\,; y)(x) + x - v \;\Leftrightarrow\; v - x \in \partial R(\cdot\,; y)(x) \tag{6}$$
where $\partial R(\cdot\,; y)$ is a partial derivative operator. For notational simplicity, we define another operator $\mathcal{F} := \mathcal{I} + \partial R(\cdot\,; y)$. Thus, the last condition in (6) indicates that $x^{k+1} = \mathcal{F}^{-1}(v)$. Note that the inverse here represents the inverse of an operator, i.e., the inverse function of $\mathcal{F}$. Thus our objective is to learn such an inverse operator, which projects v into the prior subspace. For simple priors like $\|\cdot\|_1$ or $\|\cdot\|_2^2$, the projection can be efficiently computed. In this work, we propose an implicit example-based prior, which does not have a truly Bayesian interpretation, but aids in model optimization. In line with this prior, we define the implicit proximity operator $\mathcal{G}_\theta(x; v, y)$, parameterized by $\theta$, to approximate the unknown $\mathcal{F}^{-1}$. More specifically, we propose a neural network architecture referred to as conditional Pixel Shuffling Denoising Auto-Encoders (cPSDAE) as the operator $\mathcal{G}$, where pixel shuffling [21] means periodically reordering the pixels in each channel, mapping a high resolution image to a low resolution image with scale r while increasing the number of channels to $r^2$ (see [21] for more details). This allows us to transform v so that it is at the same scale as y, and easily concatenate it with y as the input of cPSDAE. The architecture of cPSDAE is shown in Fig. 1(d).
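To illustrate the pixel shuffling operation just described, here is a minimal NumPy sketch (function names are ours); it maps an (H, W, C) image to spatial scale 1/r while multiplying the number of channels by r², and back.

```python
import numpy as np

def pixel_unshuffle(img, r):
    """Space-to-depth: periodically reorder pixels so an (H, W, C) image
    becomes (H/r, W/r, C*r*r); used to bring v to the spatial scale of y."""
    h, w, c = img.shape
    out = img.reshape(h // r, r, w // r, r, c)
    return out.transpose(0, 2, 1, 3, 4).reshape(h // r, w // r, c * r * r)

def pixel_shuffle(img, r):
    """Inverse depth-to-space operation with upscaling factor r."""
    h, w, c = img.shape
    out = img.reshape(h, w, r, r, c // (r * r))
    return out.transpose(0, 2, 1, 3, 4).reshape(h * r, w * r, c // (r * r))
```

With this, `pixel_unshuffle(v, r)` and the measurement y have matching spatial dimensions and can be concatenated along the channel axis.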
3.2 Inversion-free update of z
While it is straightforward to write down the closed-form solution for sub-problem (4) w.r.t. z as is
shown in (7), explicitly computing this solution is nontrivial.
$$z^{k+1} = K\left(A^\top y + \beta x^{k+1} + u^k/2\right), \quad \text{where} \quad K = \left(A^\top A + \beta I\right)^{-1} \tag{7}$$
Figure 1: Network for updating z (in black): (a) loss function (9), (b) structure of $B^{-1}$, (c) structure of $\mathcal{C}_\theta$. Note that the input is random noise independent from the training data. Network for updating x (in blue): (d) structure of cPSDAE $\mathcal{G}_\theta(x; \tilde{x}, y)$ ($\tilde{x}$ plays the same role as v in training), (e) adversarial training for R(x; y). Note again that (a)(b)(c) describe the network for inferring z, which is data-independent, and (d)(e) describe the network for inferring x, which is data-dependent.
In (7), $A^\top$ is the transpose of the matrix A. As we mentioned, the term K on the right hand side involves an expensive matrix inversion with computational complexity $\mathcal{O}(n^3)$. Under some specific assumptions, e.g., A is a circulant matrix, this matrix inversion can be accelerated with a Fast Fourier transformation, which has a complexity of order $\mathcal{O}(n \log n)$. Usually, the gradient based update has linear complexity in each iteration and thus has an overall complexity of order $\mathcal{O}(n_{\mathrm{int}} \log n)$, where $n_{\mathrm{int}}$ is the number of iterations. In this work, we will learn this matrix inversion explicitly by designing a neural network. Note that K is only dependent on A, and thus can be computed in advance for future use. This problem can be reduced to a smaller scale matrix inversion by applying the Sherman-Morrison-Woodbury formula:
$$K = \beta^{-1}\left(I - A^\top B A\right), \quad \text{where} \quad B = \left(\beta I + AA^\top\right)^{-1}. \tag{8}$$
Therefore, we only need to solve the matrix inversion in dimension $m \times m$, i.e., estimating B. We propose an approach to approximate it by a trainable deep convolutional neural network $\mathcal{C}_\theta \approx B$ parameterized by $\theta$. Note that $B^{-1} = \beta I + AA^\top$ can be considered as a two-layer fully-connected or convolutional network as well, but with a fixed kernel. This inspires us to design two auto-encoders with shared weights, and minimize the sum of two reconstruction losses to learn the inversion $\mathcal{C}_\theta$:
$$\arg\min_\theta\ \mathbb{E}_\varepsilon\left[\|\varepsilon - \mathcal{C}_\theta B^{-1}\varepsilon\|_2^2 + \|\varepsilon - B^{-1}\mathcal{C}_\theta \varepsilon\|_2^2\right] \tag{9}$$
where $\varepsilon$ is sampled from a standard Gaussian distribution. The loss in (9) is clearly depicted in Fig. 1(a), with the structure of $B^{-1}$ in Fig. 1(b) and the structure of $\mathcal{C}_\theta$ in Fig. 1(c). Since the matrix B is symmetric, we can reparameterize $\mathcal{C}_\theta$ as $\mathcal{W}_\theta \mathcal{W}_\theta^\top$, where $\mathcal{W}_\theta$ represents a multi-layer convolutional network and $\mathcal{W}_\theta^\top$ is a symmetric convolution transpose architecture using shared kernels with $\mathcal{W}_\theta$, as shown in Fig. 1(c) (the blocks with the same colors share the same network parameters). By plugging the learned $\mathcal{C}_\theta$ into (8), we obtain a reusable deep neural network $\mathcal{K}_\theta = \beta^{-1}(I - A^\top \mathcal{C}_\theta A)$ as a surrogate for the exact inverse matrix K. The update of z at each iteration can be done by applying the same $\mathcal{K}_\theta$ as follows:
$$z^{k+1} \leftarrow \beta^{-1}\left(I - A^\top \mathcal{C}_\theta A\right)\left(A^\top y + \beta x^{k+1} + u^k/2\right). \tag{10}$$
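A minimal sketch of the resulting inner-loop free z-update, assuming `C_theta` is any callable surrogate for B (e.g., the trained network above); the helper name and signature are illustrative.

```python
import numpy as np

def make_z_update(A, C_theta, beta):
    """Build the z-update of eq. (10): z <- beta^{-1} (I - A^T C A) rhs,
    where C_theta approximates B = (beta*I + A A^T)^{-1} from eq. (8)."""
    def update_z(x, u, y):
        rhs = A.T @ y + beta * x + u / 2
        return (rhs - A.T @ C_theta(A @ rhs)) / beta
    return update_z
```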
3.3 Adversarial training of cPSDAE
In this section, we will describe the proposed adversarial training scheme for cPSDAE to update x. Suppose that we have the paired training dataset $(x_i, y_i)_{i=1}^N$; a single cPSDAE with the input pair $(\tilde{x}, y)$ is trying to minimize the reconstruction error $\mathcal{L}_r(\mathcal{G}_\theta(\tilde{x}, y), x)$, where $\tilde{x}$ is a corrupted version of x, i.e., $\tilde{x} = x + n$ where n is random noise. Notice that $\mathcal{L}_r$ in a traditional DAE is commonly defined as an $\ell_2$ loss; however, an $\ell_1$ loss is an alternative in practice. Additionally, we follow the idea in
[19, 7] by introducing a discriminator and a comparator to help train the cPSDAE, and find that it can produce sharper or higher quality images than merely optimizing $\mathcal{G}$. This wraps our conditional generative model $\mathcal{G}_\theta$ into the conditional GAN [10] framework with an extra feature matching network (comparator). Recent advances in representation learning problems have shown that the features extracted from well pre-trained neural networks on supervised classification problems can be successfully transferred to other tasks, such as zero-shot learning [15] and style transfer learning [9]. Thus, we can simply use a pre-trained AlexNet [12] or VGG-16 model [23] on ImageNet as the comparator without fine-tuning, in order to extract features that capture complex and perceptually important properties. The feature matching loss $\mathcal{L}_f(C(\mathcal{G}_\theta(\tilde{x}, y)), C(x))$ is usually the $\ell_2$ distance of high level image features, where C represents the pre-trained network. Since C is fixed, the gradient of this loss can be back-propagated to $\theta$.
For the adversarial training, the discriminator $\mathcal{D}_\phi$ is a trainable convolutional network. We can keep the standard discriminator loss as in a traditional GAN, and add the generator loss of the GAN to the previously defined DAE loss and comparator loss. Thus, we can write down our two objectives:
$$\mathcal{L}_D(x, y) = -\log \mathcal{D}_\phi(x) - \log\left(1 - \mathcal{D}_\phi(\mathcal{G}_\theta(\tilde{x}, y))\right) \tag{11}$$
$$\mathcal{L}_G(x, y) = \lambda_r \|\mathcal{G}_\theta(\tilde{x}, y) - x\|_2^2 + \lambda_f \|C(\mathcal{G}_\theta(\tilde{x}, y)) - C(x)\|_2^2 - \lambda_a \log \mathcal{D}_\phi(\mathcal{G}_\theta(\tilde{x}, y)) \tag{12}$$
The optimization involves iteratively updating $\phi$ by minimizing $\mathcal{L}_D$ keeping $\theta$ fixed, and then updating $\theta$ by minimizing $\mathcal{L}_G$ keeping $\phi$ fixed. The proposed method, including training and inference, has been summarized in Algorithm 1. Note that each update of x or z using neural networks in an ADMM iteration has a complexity of linear order w.r.t. the data dimensionality n.
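For concreteness, a sketch of one alternating step in PyTorch is given below (our illustration, not the authors' code); the module names G, D, C, the noise level, and the loss weights lam_r, lam_f, lam_a are assumptions, and D is assumed to output probabilities in (0, 1).

```python
import torch

def train_step(G, D, C, opt_G, opt_D, x, y, lam_r=1.0, lam_f=1.0, lam_a=1e-3):
    """One alternating update of the discriminator loss (11) and the
    generator loss (12). C is a frozen feature extractor (comparator)."""
    x_tilde = x + 0.1 * torch.randn_like(x)      # corrupted input x~ = x + n
    # update phi: minimize L_D (eq. 11), keeping theta fixed
    opt_D.zero_grad()
    fake = G(x_tilde, y).detach()
    loss_D = -(torch.log(D(x)) + torch.log(1 - D(fake))).mean()
    loss_D.backward()
    opt_D.step()
    # update theta: minimize L_G (eq. 12), keeping phi fixed
    opt_G.zero_grad()
    fake = G(x_tilde, y)
    loss_G = (lam_r * (fake - x).pow(2).mean()
              + lam_f * (C(fake) - C(x)).pow(2).mean()
              - lam_a * torch.log(D(fake)).mean())
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```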
3.4 Discussion

Algorithm 1 Inner-loop free ADMM with Auxiliary Deep Neural Nets (Inf-ADMM-ADNN)
Training stage:
  1: Train net $\mathcal{K}_\theta$ for inverting $A^\top A + \beta I$
  2: Train net cPSDAE for the proximity operator of $R(x; y)$
Testing stage:
  1: for t = 1, 2, . . . do
  2:   Update x cf. $x^{k+1} = \mathcal{F}^{-1}(v)$;
  3:   Update z cf. (10);
  4:   Update u cf. (5);
  5: end for
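The testing stage amounts to the loop below, a schematic Python restatement with placeholder callables for the two pre-trained networks (not the released implementation); every iteration consists only of feed-forward passes.

```python
import numpy as np

def inf_admm_adnn(y, A, cpsdae, update_z, beta=0.01, n_iters=10):
    """Testing stage of Algorithm 1: cpsdae(v, y) approximates F^{-1}(v) and
    update_z applies eq. (10); both are assumed to be pre-trained."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iters):
        x = cpsdae(z - u / (2 * beta), y)  # x-update: x = F^{-1}(v)
        z = update_z(x, u, y)              # z-update: eq. (10)
        u = u + 2 * beta * (x - z)         # u-update: eq. (5)
    return x
```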
A critical point for learning-based methods is whether the method generalizes to other problems. More specifically, how does a method that is trained on a specific dataset perform when applied to another dataset? To what extent can we reuse the trained network without re-training? In the proposed method, two deep neural networks are trained to infer x and z. For the network w.r.t. z, the training only requires the forward model A to generate the training pairs $(\varepsilon, A\varepsilon)$. The trained network for z can be applied to any other dataset as long as A remains the same. Thus, this network can be adapted easily to accelerate inference for inverse problems without training data. However, for inverse problems that depend on a different A, a re-trained network is required. It is worth mentioning that the forward model A can be easily learned using the training dataset (x, y), leading to a fully blind estimator associated with the inverse problem. An example of learning $\hat{A}$ can be found in the supplementary materials. For the network w.r.t. x, training requires data pairs $(x_i, y_i)$ because of the amortized inference. Note that this is different from training a prior for x only using training data $x_i$. Thus, the trained network for x is confined to the specific tasks constrained by the pairs (x, y). To extend the generality of the trained network, the amortized setting can be removed, i.e., y is removed from the training, leading to a solution to the proximity operator $P_R(v) = \arg\min_x \frac{1}{2}\|x - v\|^2 + R(x)$. This proximity operation can be regarded as a denoiser which projects the noisy version v of x into the subspace imposed by R(x). The trained network (for the proximity operator) can be used as a plug-and-play prior [27] to regularize other inverse problems for datasets that share similar statistical characteristics. However, a significant change in the training dataset, e.g., different modalities like MRI and natural images (e.g., ImageNet [12]), would require re-training.
Another interesting point to mention is the scalability of the proposed method to data of different
dimensions. The scalability can be adapted using patch-based methods without loss of generality. For
example, a neural network is trained for images of size 64 × 64 but the test image is of size 256 × 256. To use this pre-trained network, the full image can be decomposed as four 64 × 64 images and fed to
the network. To overcome the possible blocking artifacts, eight overlapping patches can be drawn
from the full image and fed to the network. The outputs for these eight patches are then averaged
(unweighted or weighted) over the overlapping parts. A similar strategy using patch stitching can be
exploited to feed small patches to the network for higher dimensional datasets.
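A minimal sketch of such unweighted averaging over overlapping patches (our illustration; `net` is any patch-to-patch model, and the image size is assumed compatible with the patch grid):

```python
import numpy as np

def predict_patched(net, img, patch=64, stride=32):
    """Apply a network trained on patch x patch inputs to a larger image by
    averaging its outputs over overlapping patches."""
    H, W = img.shape[:2]
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            out[i:i + patch, j:j + patch] += net(img[i:i + patch, j:j + patch])
            weight[i:i + patch, j:j + patch] += 1.0
    return out / np.maximum(weight, 1.0)  # avoid division by zero at uncovered pixels
```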
4 Experiments
In this section, we provide experimental results and analysis on the proposed Inf-ADMM-ADNN and
compare the results with a conventional ADMM using inner loops for inverse problems. Experiments
on synthetic data have been conducted to show the fast convergence of our method, which comes
from the efficient feed-forward propagation through pre-trained neural networks. Real applications
using proposed Inf-ADMM-ADNN have been explored, including single image super-resolution,
motion deblurring and joint super-resolution and colorization.
4.1 Synthetic data
To evaluate the performance of the proposed Inf-ADMM-ADNN, we first test the neural network $\mathcal{K}_\theta$, approximating the matrix inversion, on synthetic data. More specifically, we assume that the ground-truth x is drawn from a Laplace distribution $\mathrm{Laplace}(\mu, b)$, where $\mu = 0$ is the location parameter and b is the scale parameter. The forward model A is a sparse matrix representing convolution with a stride of 4. The architecture of A is available in the supplementary materials (see Section 2). The noise n is drawn from a standard Gaussian distribution $\mathcal{N}(0, \sigma^2)$. Thus, the observed data is generated as $y = Ax + n$. Following Bayes' theorem, the maximum a posteriori estimate of x given y, i.e., maximizing $p(x|y) \propto p(y|x)p(x)$, can be equivalently formulated as $\arg\min_x \frac{1}{2\sigma^2}\|y - Ax\|_2^2 + \frac{1}{b}\|x\|_1$, where $b = 1$ and $\sigma = 1$ in this setting. Following (3), (4), (5), this problem is reduced to the following three sub-problems: i) $x^{k+1} = \mathcal{S}_{\frac{1}{2\beta}}(z^k - u^k/2\beta)$; ii) $z^{k+1} = \arg\min_z \|y - Az\|_2^2 + \beta\|x^{k+1} - z + u^k/2\beta\|_2^2$; iii) $u^{k+1} = u^k + 2\beta(x^{k+1} - z^{k+1})$, where the soft thresholding operator $\mathcal{S}$ is defined as
$$\mathcal{S}_\kappa(a) = \begin{cases} 0, & |a| \le \kappa \\ a - \mathrm{sgn}(a)\kappa, & |a| > \kappa \end{cases}$$
and $\mathrm{sgn}(a)$ extracts the sign of a. The update of $x^{k+1}$ has a closed-form solution, i.e., soft thresholding of $z^k - u^k/2\beta$. The update of $z^{k+1}$ requires the inversion of a big matrix, which is usually solved using a gradient descent based algorithm. The update of $u^{k+1}$ is straightforward. Thus, we compare the gradient descent based update, a closed-form solution for matrix inversion², and the proposed
the gradient descent based update, a closed-form solution for matrix inversion2 and the proposed
inner-free update using a pre-trained neural network. The evolution of the objective function w.r.t.
the number of iterations and the time has been plotted in the left and middle of Figs. 2. While all
three methods perform similarly from iteration to iteration (in the left of Figs. 2), the proposed innerloop free based and closed-form inversion based methods converge much faster than the gradient
based method (in the middle of Figs. 2). Considering the fact that the closed-form solution, i.e., a
direct matrix inversion, is usually not available in practice, the learned neural network allows us to
approximate the matrix inversion in a very accurate and efficient way.
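For reference, the three toy updates i)-iii) fit in a few lines of NumPy (our sketch); the explicit inversion below corresponds to the closed-form baseline and is feasible only at this toy scale.

```python
import numpy as np

def soft_threshold(a, kappa):
    """S_kappa(a): elementwise soft thresholding."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def l1_admm(y, A, beta=0.01, n_iters=100):
    """The three sub-problem updates i)-iii) above, with the z-update done
    by an explicit (small-scale) matrix inversion as a reference baseline."""
    n = A.shape[1]
    K = np.linalg.inv(A.T @ A + beta * np.eye(n))  # small-scale only
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iters):
        x = soft_threshold(z - u / (2 * beta), 1.0 / (2 * beta))  # step i)
        z = K @ (A.T @ y + beta * x + u / 2)                      # step ii)
        u = u + 2 * beta * (x - z)                                # step iii)
    return x
```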
Figure 2: Synthetic data: (left) objective vs. iterations, (middle) objective vs. time, for the GD-based, closed-form, and proposed updates. MNIST dataset: (right) NMSE vs. iterations for MNIST image 4× super-resolution, for values of β ranging from 0.0001 to 0.1.
² Note that this matrix inversion can be explicitly computed due to its small size in this toy experiment. In practice, this matrix is not built explicitly.
Figure 3: Top two rows: (column 1) LR images, (column 2) bicubic interpolation (×4), (column 3) results using the proposed method (×4), (column 4) HR image. Bottom row: (column 1) motion blurred images, (column 2) results using a Wiener filter with the best performance obtained by tuning the regularization parameter, (column 3) results using the proposed method, (column 4) ground-truth.
4.2 Image super-resolution and motion deblurring
In this section, we apply the proposed Inf-ADMM-ADNN to solve the popular image super-resolution problem. We have tested our algorithm on the MNIST dataset [14] and the 11K images of the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset [28]. In the first two rows of Fig. 3, high resolution images, as shown in the last column, have been blurred (convolved) using a Gaussian kernel of size 3 × 3 and downsampled every 4 pixels in both vertical and horizontal directions to generate the corresponding low resolution images shown in the first column. The bicubic interpolation of the LR images and the results using the proposed Inf-ADMM-ADNN on a 20% held-out test set are displayed in columns 2 and 3. Visually, the proposed Inf-ADMM-ADNN gives much better results than the bicubic interpolation, recovering more details including colors and edges. A task similar to super-resolution is motion deblurring, in which the convolution kernel is a directional kernel and there is no downsampling. The motion deblurring results using Inf-ADMM-ADNN are displayed at the bottom of Fig. 3 and are compared with the Wiener filtered deblurring result (the performance of the Wiener filter has been tuned to the best by adjusting the regularization parameter). Obviously, Inf-ADMM-ADNN gives visually much better results than the Wiener filter. Due to space limitations, more simulation results are available in the supplementary materials (see Sections 3.1 and 3.2).
To explore the convergence speed w.r.t. the ADMM regularization parameter β, we have plotted the normalized mean square error (NMSE), defined as $\mathrm{NMSE} = \|\hat{x} - x\|_2^2/\|x\|_2^2$, of super-resolved MNIST images w.r.t. ADMM iterations using different values of β in the right panel of Fig. 2. It is interesting to note that when β is large, e.g., 0.1 or 0.01, the NMSE of the ADMM updates converges to a stable value rapidly in a few iterations (less than 10). Reducing the value of β slows down the decay of NMSE over iterations but reaches a lower stable value. When the value of β is small enough, e.g., β = 0.0001, 0.0005, 0.001, the NMSE converges to the same value. This fits well with the claim in Boyd's book [2] that when β is too large it does not put enough emphasis on minimizing the objective function, causing a coarser estimate; thus a relatively small β is encouraged in practice. Note that the selection of this regularization parameter is still an open problem.
4.3 Joint super-resolution and colorization
While image super-resolution tries to enhance spatial resolution from spatially degraded images, a related application exists in the spectral domain, i.e., enhancing spectral resolution from a spectrally degraded image. One interesting example is so-called automatic colorization, i.e., hallucinating a plausible color version of a colorless photograph. To the best of the authors' knowledge, this is the first time both spectral and spatial resolutions have been enhanced from one single band image. In this section, we have tested the ability to perform joint super-resolution and colorization from one single colorless LR image on the celebA dataset [16]. The LR colorless image, its bicubic interpolation and the ×2 HR image are displayed in the top row of Fig. 4. The ADMM updates in the 1st, 4th and 7th iterations (on a held-out test set) are displayed in the bottom row, showing that the updated image evolves towards higher quality. More results are in the supplementary materials (see Section 3.3).
Figure 4: (top left) colorless LR image, (top middle) bicubic interpolation, (top right) HR ground-truth, (bottom left to right) updated image in the 1st, 4th and 7th ADMM iterations. Note that the colorless LR images and bicubic interpolations are visually similar but differ in details noticeable upon zooming.
5 Conclusion
In this paper we have proposed an accelerated alternating direction method of multipliers, namely,
Inf-ADMM-ADNN to solve inverse problems by using two pre-trained deep neural networks. Each
ADMM update consists of feed-forward propagation through these two networks, with a complexity
of linear order with respect to the data dimensionality. More specifically, a conditional pixel shuffling
denoising auto-encoder has been learned to perform amortized inference for the proximity operator.
This auto-encoder leads to an implicit prior learned from training data. A data-independent structured
convolutional neural network has been learned from noise to explicitly invert the big matrix associated
with the forward model, getting rid of any inner loop in an ADMM update, in contrast to the
conventional gradient based method. This network can also be combined with existing proximity
operators to accelerate existing ADMM solvers. Experiments and analysis on both synthetic and real
datasets demonstrate the efficiency and accuracy of the proposed method. In future work we hope to
extend the proposed method to inverse problems related to nonlinear forward models.
Appendices
We address the questions raised by the reviewers in this appendix.
To Reviewer 1: The title has been changed to "An inner-loop free solution to inverse problems using deep neural networks" according to the reviewer's suggestion, which is consistent with our arXiv submission. The pixel shuffling used in our PSDAE architecture is mainly to keep the filter size of every layer, including input and output, the same; this trick has been practically proven to remove the checkerboard effect. Especially for the super-resolution task with different scales of input/output, it basically uses the input to regress an output of the same scale but with more channels.
Figure 5: Result of super-resolution from SRGAN with different settings.
To Reviewer 2: As we explained in the rebuttal, we have our own implementation of SRCNN with and without the adversarial loss, but we did not successfully reproduce a reasonable result on our dataset. Thus, we did not include the visualization in the initial submission, since either blurriness or a checkerboard effect would appear, but we will further fine-tune the model or use other tricks such as pixel shuffling. [11] has been added to the references.
To Reviewer 3: Most of the questions have been addressed in the rebuttal.
Acknowledgments
The authors would like to thank Siemens Corporate Research for supporting this work and thank
NVIDIA for the GPU donations.
References
[1] Jonas Adler and Ozan Öktem. Solving ill-posed inverse problems using iterative deep neural networks. arXiv preprint arXiv:1704.04058, 2017.
[2] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015.
[4] JH Chang, Chun-Liang Li, Barnabas Poczos, BVK Kumar, and Aswin C Sankaranarayanan. One network to solve them all – solving linear inverse problems using deep projection models. arXiv preprint arXiv:1703.09912, 2017.
[5] Balázs Csanád Csáji. Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24:48, 2001.
[6] Li Deng, Dong Yu, et al. Deep learning: methods and applications. Foundations and Trends in Signal Processing, 7(3–4):197–387, 2014.
[7] Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems, pages 658–666, 2016.
[8] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15(12):3736–3745, 2006.
[9] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proc. IEEE Int. Conf. Comp. Vision and Pattern Recognition (CVPR), pages 2414–2423, 2016.
[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[11] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 399–406, 2010.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[13] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In Proc. European Conf. Comp. Vision (ECCV), pages 577–593. Springer, 2016.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998.
[15] Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, et al. Predicting deep zero-shot convolutional neural networks using textual descriptions. In Proc. IEEE Int. Conf. Comp. Vision (ICCV), pages 4247–4255, 2015.
[16] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proc. IEEE Int. Conf. Comp. Vision (ICCV), pages 3730–3738, 2015.
[17] Songtao Lu, Mingyi Hong, and Zhengdao Wang. A nonconvex splitting method for symmetric nonnegative matrix factorization: Convergence analysis and optimality. IEEE Transactions on Signal Processing, 65(12):3120–3135, June 2017.
[18] Helmut Maurer and Jochem Zowe. First and second-order necessary and sufficient optimality conditions for infinite-dimensional programming problems. Math. Program., 16(1):98–110, 1979.
[19] Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
[20] Jo Schlemper, Jose Caballero, Joseph V Hajnal, Anthony Price, and Daniel Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. arXiv preprint arXiv:1703.00555, 2017.
[21] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE Int. Conf. Comp. Vision and Pattern Recognition (CVPR), pages 1874–1883, 2016.
[22] M. Simoes, J. Bioucas-Dias, L.B. Almeida, and J. Chanussot. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens., 53(6):3373–3388, Jun. 2015.
[23] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[24] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[25] Albert Tarantola. Inverse problem theory and methods for model parameter estimation. SIAM, 2005.
[26] A.N. Tikhonov and V.I.A. Arsenin. Solutions of ill-posed problems. Scripta series in mathematics. Winston, 1977.
[27] Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In Proc. IEEE Global Conf. Signal and Information Processing (GlobalSIP), pages 945–948. IEEE, 2013.
[28] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[29] Q. Wei, N. Dobigeon, and Jean-Yves Tourneret. Bayesian fusion of multi-band images. IEEE J. Sel. Topics Signal Process., 9(6):1117–1127, Sept. 2015.
[30] Qi Wei, Nicolas Dobigeon, and Jean-Yves Tourneret. Fast fusion of multi-band images based on solving a Sylvester equation. IEEE Trans. Image Process., 24(11):4109–4121, Nov. 2015.
[31] Qi Wei, Nicolas Dobigeon, Jean-Yves Tourneret, J. M. Bioucas-Dias, and Simon Godsill. R-FUSE: Robust fast fusion of multi-band images based on solving a Sylvester equation. IEEE Signal Process. Lett., 23(11):1632–1636, Nov. 2016.
[32] N. Zhao, Q. Wei, A. Basarab, N. Dobigeon, D. Kouamé, and J. Y. Tourneret. Fast single image super-resolution using a new analytical solution for ℓ2–ℓ2 problems. IEEE Trans. Image Process., 25(8):3683–3697, Aug. 2016.
OnACID: Online Analysis of Calcium Imaging Data
in Real Time
Andrea Giovannucci*,1
Anne K. Churchland‡
Johannes Friedrich*,†,1
Dmitri Chklovskii*
Matthew Kaufman‡
Liam Paninski†
Eftychios A. Pnevmatikakis*,2
* Flatiron Institute, New York, NY 10010
‡ Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724
† Columbia University, New York, NY 10027
{agiovannucci, jfriedrich, dchklovskii, epnevmatikakis}@flatironinstitute.org
{mkaufman, churchland}@cshl.edu
[email protected]
Abstract
Optical imaging methods using calcium indicators are critical for monitoring the
activity of large neuronal populations in vivo. Imaging experiments typically
generate a large amount of data that needs to be processed to extract the activity
of the imaged neuronal sources. While deriving such processing algorithms is
an active area of research, most existing methods require the processing of large
amounts of data at a time, rendering them vulnerable to the volume of the recorded
data, and preventing real-time experimental interrogation. Here we introduce
OnACID, an Online framework for the Analysis of streaming Calcium Imaging
Data, including i) motion artifact correction, ii) neuronal source extraction, and iii)
activity denoising and deconvolution. Our approach combines and extends previous
work on online dictionary learning and calcium imaging data analysis, to deliver
an automated pipeline that can discover and track the activity of hundreds of cells
in real time, thereby enabling new types of closed-loop experiments. We apply our
algorithm on two large scale experimental datasets, benchmark its performance on
manually annotated data, and show that it outperforms a popular offline approach.
1 Introduction
Calcium imaging methods continue to gain traction among experimental neuroscientists due to their
capability of monitoring large targeted neuronal populations across multiple days or weeks with
decisecond temporal and single-neuron spatial resolution. To infer the neural population activity
from the raw imaging data, an analysis pipeline is employed which typically involves solving the
following problems (all of which are still areas of active research): i) correcting for motion artifacts
during the imaging experiment, ii) identifying/extracting the sources (neurons and axonal or dendritic
processes) in the imaged field of view (FOV), and iii) denoising and deconvolving the neural activity
from the dynamics of the expressed calcium indicator.
1 These authors contributed equally to this work.
2 To whom correspondence should be addressed.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The fine spatiotemporal resolution of calcium imaging comes at a data rate cost; a typical two-photon
(2p) experiment on a 512 × 512 pixel large FOV imaged at 30Hz, generates ~50GB of data (in 16-bit
integer format) per hour. These rates can be significantly higher for other planar and volumetric
imaging techniques, e.g., light-sheet [1] or SCAPE imaging [4], where the data rates can exceed 1TB
per hour. The resulting data deluge poses a significant challenge.
Of the three basic pre-processing problems described above, the problem of source extraction faces
the most severe scalability issues. Popular approaches reshape the data movies into a large array
with dimensions (#pixels) ? (#timesteps), that is then factorized (e.g., via independent component
analysis [20] or constrained non-negative matrix factorization (CNMF) [26]) to produce the locations
in the FOV and temporal activities of the imaged sources. While effective for small or medium
datasets, direct factorization can be impractical, since a typical experiment can quickly produce
datasets larger than the available RAM. Several strategies have been proposed to enhance scalability,
including parallel processing [9], spatiotemporal decimation [10], dimensionality reduction [23], and
out-of-core processing [13]. While these approaches enable efficient processing of larger datasets,
they still require significant storage, power, time, and memory resources.
Apart from recording large neural populations, optical methods can also be used for stimulation
[5]. Combining optogenetic methods for recording and perturbing neural ensembles opens the
door to exciting closed-loop experiments [24, 15, 8], where the pattern of the stimulation can be
determined based on the recorded activity during behavior. In a typical closed-loop experiment, the
monitored/perturbed regions of interest (ROIs) have been preselected by analyzing offline a previous
dataset from the same FOV. Monitoring the activity of a ROI, which usually corresponds to a soma,
typically entails averaging the fluorescence over the corresponding ROI, resulting in a signal that is
only a proxy for the actual neural activity and which can be sensitive to motion artifacts and drifts, as
well as spatially overlapping sources, background/neuropil contamination, and noise. Furthermore, by
preselecting the ROIs, the experimenter is unable to detect and incorporate new sources that become
active later during the experiment, which prevents the execution of truly closed-loop experiments.
In this paper, we present an Online, single-pass, algorithmic framework for the Analysis of Calcium
Imaging Data (OnACID). Our framework is highly scalable with minimal memory requirements,
as it processes the data in a streaming fashion one frame at a time, while keeping in memory a set
of low dimensional sufficient statistics and a small minibatch of the last data frames. Every frame
is processed in four sequential steps: i) The frame is registered against the previous denoised (and
registered) frame to correct for motion artifacts. ii) The fluorescence activity of the already detected
sources is tracked. iii) Newly appearing neurons and processes are detected and incorporated to the
set of existing sources. iv) The fluorescence trace of each source is denoised and deconvolved to
provide an estimate of the underlying spiking activity.
Our algorithm integrates and extends the online NMF algorithm of [19], the CNMF source extraction
algorithm of [26], and the near-online deconvolution algorithm of [11], to provide a framework
capable of real time identification and processing of hundreds of neurons in a typical 2p experiment
(512 × 512 pixel wide FOV imaged at 30Hz), enabling novel designs of closed-loop experiments.
We apply OnACID to two large-scale (50 and 65 minute long) mouse in vivo 2p datasets; our
algorithm can find and track hundreds of neurons faster than real-time, and outperforms the CNMF
algorithm of [26] benchmarked on multiple manual annotations using a precision-recall framework.
2
Methods
We illustrate OnACID in process in Fig. 1. At the beginning of the experiment (Fig. 1-left), only
a few components are active, as shown in panel A by the max-correlation image³, and these
are detected by the algorithm (Fig. 1B). As the experiment proceeds more neurons activate and are
subsequently detected by OnACID (Fig. 1 middle, right) which also tracks their activity across time
(Fig. 1C). See also Supplementary Movie 1 for an example in simulated data.
Next, we present the steps of OnACID in more detail.
Footnote 3: The correlation image (CI) at every pixel is equal to the average temporal correlation coefficient between that pixel and its neighbors [28] (8 neighbors were used for our analysis). The max-correlation image is obtained by computing the CI for each batch of 1000 frames, and then taking the maximum over all these images.
[Figure 1 layout: snapshots after 1000, 6000, and 90000 frames; row A: max-correlation image; row B: found components; row C: example activity traces (scale bars 5 s, 30 s, 5 min).]
Figure 1: Illustration of the online data analysis process. Snapshots of the online analysis after
processing 1000 frames (left), 6000 frames (middle), and 90000 frames (right). A) "Max-correlation"
image of registered data at each snapshot point (see text for definition). B) Spatial footprints (shapes)
of the components (neurons and processes) found by OnACID up to each point. C) Examples of
neuron activity traces (marked by contours in panel A and highlighted in red in panel B). As the
experiment proceeds, OnACID detects newly active neurons and tracks their activity.
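To make the footnote's definition concrete, the correlation image can be computed with a few lines of NumPy. This is a minimal sketch, assuming a movie stored as a (T, H, W) array; the function names are ours and not part of any published API.

```python
import numpy as np

def correlation_image(movie):
    """Average temporal correlation of each pixel with its 8 neighbors [28].

    movie: float array of shape (T, H, W).
    """
    m = movie - movie.mean(axis=0)
    m = m / (m.std(axis=0) + 1e-12)     # z-score each pixel's trace
    T, H, W = m.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # correlation of each pixel with its neighbor at offset (dy, dx)
            a = m[:, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
            b = m[:, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            corr = (a * b).mean(axis=0)
            acc[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)] += corr
            cnt[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)] += 1
    return acc / cnt

def max_correlation_image(movie, batch=1000):
    """Maximum over per-batch correlation images, as in footnote 3."""
    return np.max([correlation_image(movie[s:s + batch])
                   for s in range(0, movie.shape[0], batch)], axis=0)
```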
Motion correction: Our online approach allows us to employ a very simple yet effective motion
correction scheme: each denoised dataframe can be used to register the next incoming noisy dataframe.
To enhance robustness we use the denoised background/neuropil signal (defined in the next section)
as a template to align the next dataframe. We use rigid, sub-pixel registration [16], although piecewise
rigid registration can also be used at an additional computational cost. This simple alignment process
is not suitable for offline algorithms due to noise in the raw data, leading to the development of
various algorithms based on template matching [14, 23, 25] or Hidden Markov Models [7, 18].
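A minimal sketch of this rolling registration scheme, using scikit-image's FFT-based sub-pixel registration (the method of [16]); the denoised template passed in is a stand-in for the model's background estimate b f_t, and this single-function form is our own simplification of the pipeline.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_frame(raw_frame, template, upsample_factor=10):
    """Align one noisy frame to the previous denoised template.

    Returns the registered frame and the estimated (dy, dx) displacement.
    """
    displacement, _, _ = phase_cross_correlation(
        template, raw_frame, upsample_factor=upsample_factor)
    return nd_shift(raw_frame, displacement, order=1), displacement
```

In the online loop the template is simply the denoised background of the previous registered frame, so no global template has to be estimated in advance.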
Source extraction: A standard approach for source extraction is to model the fluorescence within
a matrix factorization framework [20, 26]. Let Y ∈ R^{d×T} denote the observed fluorescence across space and time in a matrix format, where d denotes the number of imaged pixels, and T the length of the experiment in timepoints. If the number of imaged sources is K, then let A ∈ R^{d×K} denote the matrix where column i encodes the "spatial footprint" of source i. Similarly, let C ∈ R^{K×T} denote the matrix where each row encodes the temporal activity of the corresponding source. The observed data matrix can then be expressed as

Y = AC + B + E,    (1)

where B, E ∈ R^{d×T} denote matrices for background/neuropil activity and observation noise, respectively. A common approach, introduced in [26], is to express the background matrix B as a low rank matrix, i.e., B = bf, where b ∈ R^{d×n_b} and f ∈ R^{n_b×T} denote the spatial and temporal components of the low rank background signal, and n_b is a small integer, e.g., n_b = 1, 2. The CNMF framework of [26] operates by alternating optimization of [A, b] given the data Y and estimates of [C; f], and vice versa, where each column of A is constrained to be zero outside of a neighborhood around its previous estimate. This strategy exploits the spatial locality of each neuron to reduce the computational complexity. This framework can be adapted to a data streaming setup using the online NMF algorithm of [19], where the observed fluorescence at time t can be written as

y_t = A c_t + b f_t + ε_t.    (2)
Proceeding in a similar alternating way, the activity of all neurons at time t, c_t, and the temporal background f_t, given y_t and the spatial footprints and background [A, b], can be found by solving a nonnegative least squares problem, whereas [A, b] can be estimated efficiently as in [19] by only keeping in memory the sufficient statistics (where we define ĉ_t = [c_t; f_t])

W_t = ((t−1)/t) W_{t−1} + (1/t) y_t ĉ_t^⊤,    M_t = ((t−1)/t) M_{t−1} + (1/t) ĉ_t ĉ_t^⊤,    (3)
while at the same time enforcing the same spatial locality constraints as in the CNMF framework.
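Equations (2)-(3) translate almost directly into code. The sketch below solves the nonnegative least squares problem for [c_t; f_t] and folds the frame into the sufficient statistics, while update_shapes performs the block-coordinate descent of [19] with a nonnegativity projection. Warm starts and the spatial-locality constraint on A are omitted for brevity, and variable names simply mirror the text.

```python
import numpy as np
from scipy.optimize import nnls

def process_frame(y_t, Ab, W, M, t):
    """One step of the online factorization.

    y_t : (d,) raw (registered) frame, flattened.
    Ab  : (d, K + nb) spatial footprints [A, b].
    W, M: sufficient statistics from Eq. (3).
    t   : 1-based frame index.
    """
    # traces: argmin_{c >= 0} ||y_t - [A, b] c||^2   (Eq. 2)
    c_hat, _ = nnls(Ab, y_t)
    # sufficient statistics (Eq. 3)
    W = (t - 1) / t * W + np.outer(y_t, c_hat) / t
    M = (t - 1) / t * M + np.outer(c_hat, c_hat) / t
    return c_hat, W, M

def update_shapes(Ab, W, M, n_iter=5):
    """Block-coordinate descent on [A, b] given W, M (cf. [19])."""
    for k in range(Ab.shape[1]):
        for _ in range(n_iter):
            if M[k, k] == 0:
                continue
            # gradient step on column k, projected onto the nonnegative orthant
            Ab[:, k] = np.maximum(
                Ab[:, k] + (W[:, k] - Ab @ M[:, k]) / M[k, k], 0)
    return Ab
```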
Deconvolution: The online framework presented above estimates the demixed fluorescence traces
c1 , . . . , cK of each neuronal source. The fluorescence is a filtered version of the underlying neural
activity that we want to infer. To further denoise and deconvolve the neural activity from the
dynamics of the indicator we use the OASIS algorithm [11] that implements the popular spike
deconvolution algorithm of [30] in a nearly online fashion by adapting the highly efficient Pool
Adjacent Violators Algorithm used in isotonic regression
[3]. The calcium dynamics is modeled with a stable autoregressive process of order p, c_t = Σ_{i=1}^{p} γ_i c_{t−i} + s_t. We use p = 1 here, but can extend to p = 2 to incorporate the indicator rise time [11]. OASIS solves a modified LASSO problem

minimize_{ĉ, ŝ}  (1/2) ‖ĉ − y‖² + λ‖ŝ‖₁    subject to    ŝ_t = ĉ_t − γĉ_{t−1} ≥ s_min  or  ŝ_t = 0,    (4)

where the ℓ₁ penalty on ŝ or the minimal spike size s_min can be used to enforce sparsity of the neural activity. The algorithm progresses through each time series sequentially from beginning to end and backtracks only to the most recent spike. We can further restrict the lag to a few frames, to obtain a good approximate solution applicable for real-time experiments.
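For intuition, problem (4) with p = 1 can be approximated crudely by inverting the AR(1) dynamics and applying the minimal-spike-size constraint. This naive sketch is not the pool-adjacent-violators solver of [11], which additionally re-fits ĉ so that the thresholded spikes remain consistent with the dynamics; it only illustrates the role of γ and s_min.

```python
import numpy as np

def naive_ar1_deconvolve(c, gamma, s_min):
    """Invert c_t = gamma * c_{t-1} + s_t, then enforce s_t >= s_min or s_t = 0.

    c: denoised fluorescence trace (1-D array); gamma: AR(1) decay constant.
    """
    s = np.empty_like(c)
    s[0] = c[0]
    s[1:] = c[1:] - gamma * c[:-1]
    s[s < s_min] = 0.0
    return s
```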
Detecting new components: The approach explained above enables tracking the activity of a fixed
number of sources, and will ignore neurons that become active later in the experiment. To account
for a variable number of sources in an online NMF setting, [12] proposes to add a new random
component when the correlation coefficient between each data frame and its representation in terms of
the current factors is lower than a threshold. This approach is insufficient here since the footprint of a
new neuron in the whole FOV is typically too small to modify the correlation coefficient significantly.
We approach the problem by introducing a buffer that contains the last l_b instances of the residual signal r_t = y_t − A c_t − b f_t, where l_b is a reasonably small number, e.g., l_b = 100. On this buffer, similarly to [26], we perform spatial smoothing with a Gaussian kernel with radius similar to the expected neuron radius, and then search for the point in space that explains the maximum variance. New candidate components a_new and c_new are estimated by performing a local rank-1 NMF of the residual matrix restricted to a fixed neighborhood around the point of maximal variance.
To limit false positives, the candidate component is screened for quality. First, to prevent noise overfitting, the shape a_new must be significantly correlated (e.g., θ_s ≈ 0.8-0.9) with the residual buffer averaged over time and restricted to the spatial extent of a_new. Moreover, if a_new significantly overlaps with any of the existing components, then its temporal component c_new must not be highly correlated with the corresponding temporal components; otherwise we reject it as a possible duplicate of an existing component. Once a new component is accepted, [A, b] and [C; f] are augmented with a_new and c_new respectively, and the sufficient statistics are updated as follows:

W_t = [ W_t , (1/t) Y_buf c_new^⊤ ],    M_t = (1/t) [ t M_t , C̄_buf c_new^⊤ ; c_new C̄_buf^⊤ , ‖c_new‖² ],    (5)

where Y_buf and C̄_buf denote the matrices Y and [C; f], restricted to the last l_b frames that the buffer stores.
This process is repeated until no new components are accepted, at which point the next frame is read
and processed. The whole online procedure is described in Algorithm 1; the supplement includes
pseudocode description of all the referenced routines.
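The candidate-search step can be sketched as follows. The spatial smoothing, the max-variance search, and the local rank-1 NMF follow the text; the concrete thresholds, the neighborhood size, and the omission of the duplicate test against existing components are simplifications of ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_candidate(R_buf, fov_shape, radius=3, half_width=8, n_iter=10):
    """Search the residual buffer for one new component.

    R_buf: (d, lb) residual frames r_t = y_t - A c_t - b f_t, flattened.
    fov_shape: (H, W) of the field of view.
    """
    lb = R_buf.shape[1]
    E = R_buf.reshape(fov_shape + (lb,))
    E_s = gaussian_filter(E, sigma=(radius, radius, 0))   # spatial smoothing only
    y0, x0 = np.unravel_index(np.argmax(E_s.var(axis=2)), fov_shape)
    # restrict to a neighborhood around the point of maximal variance
    ys = slice(max(y0 - half_width, 0), min(y0 + half_width, fov_shape[0]))
    xs = slice(max(x0 - half_width, 0), min(x0 + half_width, fov_shape[1]))
    patch = E[ys, xs].reshape(-1, lb)
    # local rank-1 NMF by alternating nonnegative updates
    a = np.maximum(patch.mean(axis=1), 0) + 1e-12
    for _ in range(n_iter):
        c = np.maximum(patch.T @ a, 0) / (a @ a)
        a = np.maximum(patch @ c, 0) / (c @ c + 1e-12)
    # quality screen: shape must correlate with the time-averaged residual
    corr = np.corrcoef(a, patch.mean(axis=1))[0, 1]
    return (a, c, (ys, xs)) if corr > 0.85 else None
```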
Initialization: To initialize our algorithm we use the CNMF algorithm on a short initial batch of
data of length Tb , (e.g., Tb = 1000). The sufficient statistics are initialized from the components that
the offline algorithm finds according to (3). To ensure that new components are also initialized in
the darker parts of the FOV, each data frame is normalized with the (running) mean for every pixel,
during both the offline and the online phases.
Algorithmic Speedups: Several algorithmic and computational schemes are employed to boost the
speed of the algorithm and make it applicable to real-time large-scale experiments. In [19] block
coordinate descent is used to update the factors A, warm started at the value from the previous
iteration. The same trick is used here not only for A, but also for C, since the calcium traces are
continuous and typically change slowly. Moreover, the temporal traces of components that do not
spatially overlap with each other can be updated simultaneously in vector form; we use a simple
greedy scheme to partition the components into spatially non-overlapping groups.
Since neurons' shapes are not expected to change at a fast timescale, updating their values (i.e., recomputing A and b) is not required at every timepoint; in practice we update every l_b timesteps.
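The greedy partition into spatially non-overlapping groups is a small graph-coloring heuristic; a sketch, with an overlap test of our own choosing (footprints sharing any pixel mass):

```python
import numpy as np

def determine_groups(A, thresh=0.0):
    """Greedily partition components so members of a group do not overlap.

    A: (d, K) spatial footprints; components i, j overlap when the inner
    product of their (nonnegative) footprints exceeds `thresh`.
    """
    K = A.shape[1]
    overlaps = (A.T @ A) > thresh      # K x K boolean overlap graph
    groups = []
    for i in range(K):
        for g in groups:
            if not any(overlaps[i, j] for j in g):
                g.append(i)            # fits into an existing group
                break
        else:
            groups.append([i])         # start a new group
    return groups
```

Traces of all components within one group can then be updated in a single vectorized step, since their footprints do not interact in the least squares problem.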
Algorithm 1 OnACID
Require: Data matrix Y, initial estimates A, b, C, f, S, current number of components K, current timestep t_0, rest of parameters.
 1: W = Y[:, 1:t_0] C^⊤ / t_0
 2: M = C C^⊤ / t_0                                        ▷ Initialize sufficient statistics
 3: G = DetermineGroups([A, b], K)                         ▷ Alg. S1-S2
 4: R_buf = [Y − [A, b][C; f]][:, t_0 − l_b + 1 : t_0]     ▷ Initialize residual buffer
 5: t = t_0
 6: while there is more data do
 7:     t ← t + 1
 8:     y_t ← AlignFrame(y_t, b f_{t−1})                   ▷ [16]
 9:     [c_t; f_t] ← UpdateTraces([A, b], [c_{t−1}; f_{t−1}], y_t, G)   ▷ Alg. S3
10:     C, S ← OASIS(C, γ, s_min, λ)                       ▷ [11]
11:     [A, b], [C, f], K, G, R_buf, W, M ←
12:         DetectNewComponents([A, b], [C, f], K, G, R_buf, y_t, W, M)   ▷ Alg. S4
13:     R_buf ← [R_buf[:, 2:l_b], y_t − A c_t − b f_t]     ▷ Update residual buffer
14:     if mod(t − t_0, l_b) = 0 then                      ▷ Update W, M, [A, b] every l_b timesteps
15:         W, M ← UpdateSuffStatistics(W, M, y_t, [c_t; f_t])   ▷ Equation (3)
16:         [A, b] ← UpdateShapes(W, M, [A, b])            ▷ Alg. S5
17: return A, b, C, f, S
[Figure 2 panels: A) component map with true positives, false positives, and false negatives; B) Tukey boxplots of Pearson's r vs. allowed lag (0, 2, 5 frames, and the offline/unrestricted solution); C) per-frame processing time in ms over the 2000 frames.]
Figure 2: Application to simulated data. A) Detected and missed components. B) Tukey boxplot
of spike train correlations with ground truth. Online deconvolution recovers spike trains well and the
accuracy increases with the allowed lag in spike assignment. C) Processing time is less than 33 ms
for all the frames.
Additionally, the sufficient statistics W_t, M_t are only needed for updating the estimates of [A, b], so they can be updated only when required. Motion correction can be sped up by estimating the motion
only on a small (active) contiguous part of the FOV. Finally, as shown in [10], spatial decimation can
bring significant speed benefits without compromising the quality of the results.
Software: OnACID is implemented in Python and is available at https://github.com/simonsfoundation/caiman as part of the CaImAn package [13].
3 Results
Benchmarking on simulated data: To compare to ground truth spike trains, we simulated a 2000
frame dataset taken at 30 Hz over a 256×256 pixel wide FOV containing 400 "donut" shaped neurons
with Poisson spike trains (see supplement for details). OnACID was initialized on the first 500 frames.
During initialization, 265 active sources were accurately detected (Fig. S2). After the full 2000
frames, the algorithm had detected and tracked all active sources, plus one false positive (Fig. 2A).
After detecting a neuron, we need to extract its spikes with a short time-lag, to enable interesting
closed loop experiments. To quantify performance we measured the correlation of the inferred spike
train with the ground truth (Fig. 2B). We varied the lag in the online estimator, i.e. the number of
future samples observed before assigning a spike at time zero. Lags of 2-5 already yield similar
Table 1: OnACID significantly outperforms the offline CNMF approach. Benchmarking is against
two independent manual annotations within the precision/recall (and their harmonic mean F1 score)
framework. For each row-column pair, the column dataset is regarded as the ground truth.
F1 (precision, recall) | Labeler 1         | Labeler 2         | CNMF
OnACID                 | 0.79 (0.87, 0.72) | 0.78 (0.86, 0.71) | 0.79 (0.83, 0.75)
CNMF                   | 0.71 (0.74, 0.69) | 0.71 (0.75, 0.68) | -
Labeler 2              | 0.89 (0.89, 0.89) | -                 |
results to the solution with unrestricted lag. A further requirement for online closed-loop experiments
is that the computational processing is fast enough. To balance the computational load over frames,
we distributed here the shape update over the frames, while still updating each neuron every 30
frames on average. Because the shape update is the last step of the loop in Algorithm 1, we keep track
of the time already spent in the iteration and increase or decrease the number of updated neurons
accordingly. In this way the frame processing rate always remained higher than 30 Hz (Fig. 2C).
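This load balancing amounts to spending whatever is left of the per-frame time budget on shape updates; a sketch, where the 33 ms budget corresponds to the 30 Hz frame rate and update_one is a stand-in for the per-component refresh:

```python
import time

FRAME_BUDGET = 1.0 / 30.0   # seconds available per frame at 30 Hz

def distributed_shape_update(pending, update_one, t_start):
    """Update as many component shapes as the remaining frame budget allows.

    pending: queue (list) of component indices awaiting their periodic
        shape update; update_one(k) refreshes component k in place.
    t_start: wall-clock time at which processing of this frame began.
    """
    done = 0
    while pending and (time.time() - t_start) < FRAME_BUDGET:
        update_one(pending.pop(0))
        done += 1
    return done   # unprocessed components roll over to the next frame
```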
Application to in vivo 2p mouse hippocampal data: Next we considered a larger scale (90K frames,
480×480 pixels) real 2p calcium imaging dataset taken at 30 Hz (i.e., a 50 minute experiment). Motion
artifacts were corrected prior to the analysis described below. The online algorithm was initialized on
the first 1000 frames of the dataset using a Python implementation of the CNMF algorithm found in
the CaImAn package [13]. During initialization 139 active sources were detected; by the end of all
90K frames, 727 active sources had been detected and tracked (5 of which were discarded due to
their small size).
Benchmarking against offline processing and manual annotations: We collected manual annotations from two independent labelers who were instructed to find round or donut shaped neurons of
similar size using the ImageJ Cell Magic Wand tool [31] given i) a movie obtained by removing a
running 20th percentile (as a crude background approximation) and downsampling in time by a factor
of 10, and ii) the max-correlation image. The goal of this pre-processing was to suppress silent and
promote active cells. The labelers found respectively 872 and 880 ROIs. We also compared with the
CNMF algorithm applied to the whole dataset which found 904 sources (805 after filtering for size).
To quantify performance we used a precision/recall framework similar to [2]. As a distance metric
between two cells we used the Jaccard distance, and the pairing between different annotations was
computed using the Hungarian algorithm, where matches with distance > 0.7 were discarded (footnote 4).
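The matching step can be sketched with SciPy's Hungarian solver; linear_sum_assignment is the standard implementation, the masks are assumed to be boolean arrays, and the 0.7 cutoff mirrors the text.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(masks_a, masks_b, max_dist=0.7):
    """Pair two lists of boolean masks by minimal total Jaccard distance."""
    D = np.ones((len(masks_a), len(masks_b)))
    for i, a in enumerate(masks_a):
        for j, b in enumerate(masks_b):
            union = np.logical_or(a, b).sum()
            if union > 0:
                D[i, j] = 1.0 - np.logical_and(a, b).sum() / union
    rows, cols = linear_sum_assignment(D)          # Hungarian algorithm
    matches = [(i, j) for i, j in zip(rows, cols) if D[i, j] <= max_dist]
    tp = len(matches)
    precision = tp / max(len(masks_a), 1)          # masks_b taken as ground truth
    recall = tp / max(len(masks_b), 1)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return matches, precision, recall, f1
```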
Table 1 summarizes the results within the precision/recall framework. The online algorithm not
only matches but outperforms the offline approach of CNMF, reaching high performance values
(F1 = 0.79 and 0.78 against the two manual annotations, as opposed to 0.71 against both annotations
for CNMF). The two annotations matched closely with each other (F1 = 0.89), indicating high
reliability, whereas OnACID vs CNMF also produced a high score (F1 = 0.79), suggesting significant
overlap in the mismatches between the two algorithms against manual annotations.
Fig. 3 offers a more detailed view, where contour plots of the detected components are superimposed
on the max-correlation image for the online (Fig. 3A) and offline (Fig. 3B) algorithms (white) and the
annotations of Labeler 1 (red) restricted to a 200×200 pixel part of the FOV. Annotations of matches
and mismatches between the online algorithm and the two labelers, as well as between the two labelers
in the entire FOV are shown in Figs. S3-S8. For the automated procedures binary masks and contour
plots were constructed by thresholding the spatial footprint of each component at a level equal to 0.2
times its maximum value. A close inspection of the matches between the online algorithm and the
manual annotation (Fig. 3A-left) indicates that neurons with a strong footprint in the max-correlation
image (indicating calcium transients with high amplitude compared to noise and background/neuropil
activity) are reliably detected, despite the high neuron density and level of overlap. On the other
hand, mismatches (Fig. 3B-left) can sometimes be attributed to shape mismatches, manually selected
components with no signature in the max-correlation image (indicating faint or possibly unclear
activity) that are not detected by the online algorithm (false negatives), or small partially visible
processes detected by OnACID but ignored by the labelers ("false" positives).
Footnote 4: Note that the Cell Magic Wand Tool, by construction, tends to select circular ROI shapes, whereas the results of the online algorithm do not pose restrictions on the shapes. As a result, the computed Jaccard distances tend to be overestimated. This explains our choice of a seemingly high mismatch threshold.
[Figure 3 panels: A) online vs. human matches and mismatches; B) offline vs. human; C) three example traces (online, offline, human; r = 0.89, 0.98, 0.76); D) empirical CDF of trace correlation coefficients; E) timing: cost of tracking neurons' activity per frame (with real-time line), cost of updating shapes per neuron, and cost of adding neurons, each as a function of the number of neurons.]
Figure 3: Application to an in vivo 50min long hippocampal dataset and comparison against
an offline approach and manual annotation. A-left) Matched inferred locations between the
online algorithm (white) and the manual annotation of Labeler 1 (red), superimposed on the max-correlation image. A-right) False positive (white) and false negative (red) mismatches between the
online algorithm and a manual annotation. B) Same for the offline CNMF algorithm (grey) against
the same manual annotation (red). The online approach outperforms the CNMF algorithm in the
precision/recall framework (F1 score 0.77 vs 0.71). The images are restricted to a 200×200 pixel
part of the FOV. Matches and non-matches for the whole FOV are shown in the supplement. C)
Examples of inferred sources and their traces from the two algorithms and corresponding annotation
for three identified neurons (also shown with orange arrows in panels A, B left). The algorithm is
capable of identifying new neurons once they become active, and track their activity similarly to
offline approaches. D) Empirical CDF of correlation coefficients between the matched traces between
the online and the offline approaches over the entire 50 minute traces. The majority of the correlation
coefficients has very high values suggesting that the online algorithm accurately tracks the neural
activity across time (see also correlation coefficients for the three examples shown in panel C). E)
Timing of the online process. Top: time required per frame when no shapes are updated and no new neurons are incorporated. The algorithm is always faster than real time in tracking neurons and
scales mildly with the number of neurons. Time required to update shapes per neuron (middle), and
add new neurons (bottom) as a function of the number of neurons. Adding neurons is slower but
occurs only sporadically affecting only mildly the required processing time (see text for details).
Fig. 3C shows examples of the traces from three selected neurons. OnACID can detect and track
neurons with very sparse spiking over the course of the entire 50 minute experiment (Fig. 3C-top),
and produce traces that are highly correlated with their offline counterparts. To examine the quality
of the inferred traces (where ground truth collection at such scale is both very strenuous and severely
impeded by the presence of background signals and neuropil activity), we compared the traces
between the online algorithm and the CNMF approach on matched pairs of components. Fig. 3D
shows the empirical cumulative distribution function (CDF) of the correlation coefficients from this
comparison. The majority of the coefficients attain values close to 1, suggesting that the online
algorithm can detect new neurons once they become active and then reliably track their activity.
OnACID is faster than real time on average: In addition to being more accurate, OnACID is also considerably faster, as it required ∼27 minutes, i.e., ∼2× faster than real time on average, to analyze the full dataset (2 minutes for initialization and 25 for the online processing), as opposed to ∼1.5 hours for the offline approach and ∼10 hours for each of the annotators (who only select ROIs).
Fig. 3E illustrates the time consumption of the various steps. In the majority of the frames where no
spatial shapes are being updated and no new neurons are being incorporated, OnACID processing
speed exceeds the data rate of 30Hz (Fig. 3E-top), and this processing time scales only mildly with
the inclusion of new neurons. The cost of updating shapes and sufficient statistics per neuron is also
very low (< 1ms), and only scales mildly with the number of existing neurons (Fig. 3E-middle). As
argued before this cost can be distributed among all the frames while maintaining faster than real time
processing rates. The expensive step appears when detecting and including one or possibly more new
neurons in the algorithm (Fig. 3E-bottom). Although this occurs only sporadically, several speedups
can be potentially employed here to achieve beyond real time at every frame (see also Discussion
section), which would facilitate zero-lag closed-loop experiments.
Application to in vivo 2p mouse parietal cortex data: As a second application to 2p data we used
a 116,000 frame dataset, taken at 30 Hz over a 512×512 FOV (64 min long). The first 3000 frames
were used for initialization during which the CNMF algorithm found 442 neurons, before switching
to OnACID, which by the end of the experiment found a total of 752 neurons (734 after filtering for
size). Compared to two independent manual annotations of 928 and 875 ROIs respectively, OnACID
achieved F1 = 0.76, 0.79 significantly outperforming CNMF (F1 = 0.65, 0.66 respectively). The
matches and mismatches between OnACID and Labeler 1 on a 200×200 pixel part of the FOV are
shown in Fig. 4A. Full FOV pairings as well as precision/recall metrics are given in Table 2.
Table 2: Comparison of performance of OnACID and the CNMF algorithm using the precision/recall
framework for the parietal cortex 116000 frames dataset. For each row-column pair, the column
dataset is regarded as ground truth. The numbers in the parentheses are the precision and recall,
respectively, preceded by their harmonic mean (F1 score). OnACID significantly outperforms the
offline CNMF approach.
F1 (precision, recall) | Labeler 1         | Labeler 2         | CNMF
OnACID                 | 0.76 (0.86, 0.68) | 0.79 (0.86, 0.72) | 0.65 (0.55, 0.82)
CNMF                   | 0.65 (0.70, 0.60) | 0.66 (0.74, 0.59) | -
Labeler 2              | 0.89 (0.86, 0.91) | -                 |
For this dataset, rigid motion correction was also performed according to the simple method of
aligning each frame to the denoised (and registered) background from the previous frame. Fig. 4B
shows that this approach produced strikingly similar results to an offline template based, rigid motion
correction method [25]. The difference in the displacements produced by the two methods was less
than 1 pixel for all 116,000 frames with standard deviations 0.11 and 0.12 pixel for the x and y
directions, respectively. In terms of timing, OnACID processed the dataset in 48 minutes, again faster
than real time on average. This also includes the time needed for motion correction, which on average
took 5ms per frame (a bit less than 10 minutes in total).
4 Discussion - Future Work
Although at first striking, the superior performance of OnACID compared to offline CNMF, for the
datasets presented in this work, can be attributed to several factors. Calcium transient events are
localized both in space (spatial footprint of a neuron), and in time (typically 0.3-1 s for genetic indicators).
[Figure 4 panels: A) matches and mismatches (human vs. online); B) x and y shift displacements (in pixels) over 60 min, offline vs. online registration.]
Figure 4: Application to an in vivo 64min long parietal cortex dataset. A-left) Matched inferred
locations between the online algorithm (white) and the manual annotation of Labeler 1 (red). A-right)
False positive (white) and false negative (red) mismatches between the online algorithm and a manual
annotation. B) Displacement vectors estimated by OnACID during motion registration compared to a
template based algorithm. OnACID estimates the same motion vectors at a sub-pixel resolution (see
text for more details).
By looking at a short rolling buffer, OnACID is able to detect activity more robustly compared
to offline approaches that look at all the data simultaneously. Moreover, OnACID searches for new
activity in the residuals buffer that excludes the activity of already detected neurons, making it easier
to detect new overlapping components. Finally, offline CNMF requires the a priori specification of the
number of components, making it more prone to either false positive or false negative components.
For both the datasets presented above, the analysis was done using the same space correlation
threshold θ_s = 0.9. This strict choice leads to results with high precision and lower recall (see Tables
1 and 2). Results can be moderately improved by allowing a second pass of the data that can identify
neurons that were initially not selected. Moreover, by relaxing the threshold the discrepancy between
the precision and recall scores can be reduced, with only marginal modifications to the F1 scores
(data not shown).
Our current implementation performs all processing serially. In principle, significant speed gains can
be obtained by performing computations not needed at each timestep (updating shapes and sufficient
statistics) or occur only sporadically (incorporating a new neuron) in a parallel thread with shared
memory. Moreover, different online dictionary learning algorithms that do not require the solution of
an inverse problem at each timestep can potentially further speed up our framework [17].
For detecting centroids of new sources OnACID examines a static image obtained by computing the
variance across time of the spatially smoothed residual buffer. While this approach works very well in
practice it effectively favors shapes looking similar to a pre-defined Gaussian blob (when spatially
smoothed). Different approaches for detecting neurons in static images can be possibly used here,
e.g., [22], [2], [29], [27].
Apart from facilitating closed-loop behavioral experiments and rapid general calcium imaging data
analysis, our online pipeline can potentially be employed in future optical-based brain-computer
interfaces [6, 21] where high quality real-time processing is critical to their performance. These
directions will be pursued in future work.
Acknowledgments
We thank Sue Ann Koay, Jeff Gauthier and David Tank (Princeton University) for sharing their cortex
and hippocampal data with us. We thank Lindsey Myers, Sonia Villani and Natalia Roumelioti for
providing manual annotations. We thank Daniel Barabasi (Cold Spring Harbor Laboratory) for useful
discussions. AG, DC, and EAP were internally funded by the Simons Foundation. Additional support
was provided by SNSF P300P2_158428 (JF), and NIH BRAIN Initiative R01EB22913, DARPA
N66001-15-C-4032, IARPA MICRONS D16PC00003 (LP).
References
[1] Misha B Ahrens, Michael B Orger, Drew N Robson, Jennifer M Li, and Philipp J Keller. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413-420, 2013.
[2] Noah Apthorpe, Alexander Riordan, Robert Aguilar, Jan Homann, Yi Gu, David Tank, and H Sebastian Seung. Automatic neuron detection in calcium imaging data using convolutional networks. In Advances in Neural Information Processing Systems, pages 3270-3278, 2016.
[3] Richard E Barlow, David J Bartholomew, JM Bremner, and H Daniel Brunk. Statistical inference under order restrictions: The theory and application of isotonic regression. Wiley, New York, 1972.
[4] Matthew B Bouchard, Venkatakaushik Voleti, César S Mendes, Clay Lacefield, Wesley B Grueber, Richard S Mann, Randy M Bruno, and Elizabeth MC Hillman. Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Nature Photonics, 9(2):113-119, 2015.
[5] Edward S Boyden, Feng Zhang, Ernst Bamberg, Georg Nagel, and Karl Deisseroth. Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9):1263-1268, 2005.
[6] Kelly B Clancy, Aaron C Koralek, Rui M Costa, Daniel E Feldman, and Jose M Carmena. Volitional modulation of optically recorded calcium signals during neuroprosthetic learning. Nature Neuroscience, 17(6):807-809, 2014.
[7] Daniel A Dombeck, Anton N Khabbaz, Forrest Collman, Thomas L Adelman, and David W Tank. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron, 56(1):43-57, 2007.
[8] Valentina Emiliani, Adam E Cohen, Karl Deisseroth, and Michael Häusser. All-optical interrogation of neural circuits. Journal of Neuroscience, 35(41):13917-13926, 2015.
[9] Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas J Sofroniew, Davis V Bennett, Joshua Rosen, Chao-Tsung Yang, Loren L Looger, and Misha B Ahrens. Mapping brain activity at scale with cluster computing. Nature Methods, 11(9):941-950, 2014.
[10] Johannes Friedrich, Weijian Yang, Daniel Soudry, Yu Mu, Misha B Ahrens, Rafael Yuste, Darcy S Peterka, and Liam Paninski. Multi-scale approaches for high-speed imaging and analysis of large neural populations. bioRxiv, page 091132, 2016.
[11] Johannes Friedrich, Pengcheng Zhou, and Liam Paninski. Fast online deconvolution of calcium imaging data. PLOS Computational Biology, 13(3):e1005423, 2017.
[12] Sahil Garg, Irina Rish, Guillermo Cecchi, and Aurelie Lozano. Neurogenesis-inspired dictionary learning: Online model adaption in a changing world. arXiv preprint arXiv:1701.06106, 2017.
[13] A Giovannucci, J Friedrich, B Deverett, V Staneva, D Chklovskii, and E Pnevmatikakis. CaImAn: An open source toolbox for large scale calcium imaging data analysis on standalone machines. Cosyne Abstracts, 2017.
[14] David S Greenberg and Jason ND Kerr. Automated correction of fast motion artifacts for two-photon imaging of awake animals. Journal of Neuroscience Methods, 176(1):1-15, 2009.
[15] Logan Grosenick, James H Marshel, and Karl Deisseroth. Closed-loop and activity-guided optogenetic control. Neuron, 86(1):106-139, 2015.
[16] Manuel Guizar-Sicairos, Samuel T Thurman, and James R Fienup. Efficient subpixel image registration algorithms. Optics Letters, 33(2):156-158, 2008.
[17] Tao Hu, Cengiz Pehlevan, and Dmitri B Chklovskii. A hebbian/anti-hebbian network for online sparse dictionary learning derived from symmetric matrix factorization. In Signals, Systems and Computers, 2014 48th Asilomar Conference on, pages 613-619. IEEE, 2014.
[18] Patrick Kaifosh, Jeffrey D Zaremba, Nathan B Danielson, and Attila Losonczy. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in Neuroinformatics, 8:80, 2014.
[19] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11(Jan):19-60, 2010.
[20] Eran A Mukamel, Axel Nimmerjahn, and Mark J Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63(6):747-760, 2009.
[21] Daniel J O'Shea, Eric Trautmann, Chandramouli Chandrasekaran, Sergey Stavisky, Jonathan C Kao, Maneesh Sahani, Stephen Ryu, Karl Deisseroth, and Krishna V Shenoy. The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces. Experimental Neurology, 287:437-451, 2017.
[22] Marius Pachitariu, Adam M Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, and Maneesh Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In Advances in Neural Information Processing Systems, pages 1745-1753, 2013.
[23] Marius Pachitariu, Carsen Stringer, Sylvia Schröder, Mario Dipoppa, L Federico Rossi, Matteo Carandini, and Kenneth D Harris. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. bioRxiv, page 061507, 2016.
[24] Adam M Packer, Lloyd E Russell, Henry WP Dalgleish, and Michael Häusser. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nature Methods, 12(2):140-146, 2015.
[25] Eftychios A Pnevmatikakis and Andrea Giovannucci. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. Journal of Neuroscience Methods, 291:83-94, 2017.
[26] Eftychios A Pnevmatikakis, Daniel Soudry, Yuanjun Gao, Timothy A Machado, Josh Merel, David Pfau, Thomas Reardon, Yu Mu, Clay Lacefield, Weijian Yang, et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285-299, 2016.
[27] Stephanie Reynolds, Therese Abrahamsson, Renaud Schuck, P Jesper Sjöström, Simon R Schultz, and Pier Luigi Dragotti. ABLE: an activity-based level set segmentation algorithm for two-photon calcium imaging data. eNeuro, pages ENEURO-0012, 2017.
[28] Spencer L Smith and Michael Häusser. Parallel processing of visual space by neighboring neurons in mouse visual cortex. Nature Neuroscience, 13(9):1144-1149, 2010.
[29] Quico Spaen, Dorit S Hochbaum, and Roberto Asín-Achá. HNCcorr: A novel combinatorial approach for cell identification in calcium-imaging movies. arXiv preprint arXiv:1703.01999, 2017.
[30] Joshua T Vogelstein, Adam M Packer, Timothy A Machado, Tanya Sippy, Baktash Babadi, Rafael Yuste, and Liam Paninski. Fast nonnegative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology, 104(6):3691-3704, 2010.
[31] Theo Walker. Cell magic wand tool, 2014.
6,449 | 6,833 | Collaborative PAC Learning
Avrim Blum
Toyota Technological Institute at Chicago
Chicago, IL 60637
[email protected]
Ariel D. Procaccia
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Nika Haghtalab
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Mingda Qiao
Institute for Interdisciplinary Information Sciences
Tsinghua University
Beijing, China 100084
[email protected]
Abstract
We consider a collaborative PAC learning model, in which k players attempt to
learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to
the ratio between the sample complexity of collaborative PAC learning and its
non-collaborative (single-player) counterpart as the overhead. We design learning
algorithms with O(ln(k)) and O(ln²(k)) overhead in the personalized and centralized variants of our model. This gives an exponential improvement upon the naïve algorithm that does not share information among players. We complement our upper bounds with an Ω(ln(k)) overhead lower bound, showing that our results are
tight up to a logarithmic factor.
1 Introduction
According to Wikipedia, collaborative learning is a "situation in which two or more people learn ... something together," e.g., by "capitalizing on one another's resources" and "asking one another for information." Indeed, it seems self-evident that collaboration, and the sharing of information, can make learning more efficient. Our goal is to formalize this intuition and study its implications.
As an example, suppose k branches of a department store, which have sales data for different items in
different locations, wish to collaborate on learning which items should be sold at each location. In
this case, we would like to use the sales information across different branches to learn a good policy
for each branch. Another example is given by k hospitals with different patient demographics, e.g.,
in terms of racial or socio-economic factors, which want to predict occurrence of a disease in patients.
In addition to requiring a classifier that performs well on the population served by each hospital, it is
natural to assume that all hospitals deploy a common classifier.
Motivated by these examples, we consider a model of collaborative PAC learning, in which k players
attempt to learn the same underlying concept. We then ask how much information is needed for all
players to simultaneously succeed in learning a desirable classifier. Specifically, we focus on the
classic probably approximately correct (PAC) setting of Valiant [14], where there is an unknown target
function f* ∈ F. We consider k players with distributions D_1, . . . , D_k that are labeled according to f*. Our goal is to learn f* up to an error of ε on each and every player distribution while requiring only a small number of samples overall.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
A natural but naïve algorithm that forgoes collaboration between players can achieve our objective by taking, from each player distribution, a number of samples that is sufficient for learning the individual task, and then training a classifier over all samples. Such an algorithm uses k times as many samples as needed for learning an individual task; we say that this algorithm incurs O(k) overhead in sample complexity. By contrast, we are interested in algorithms that take advantage
sample complexity.
We study two variants of the aforementioned model: personalized and centralized. In the personalized
setting (as in the department store example), we allow the learning algorithm to return different
functions for different players. That is, our goal is to return classifiers f_1, . . . , f_k that have error of at most ε on player distributions D_1, . . . , D_k, respectively. In the centralized setting (as in the hospital example), the learning algorithm is required to return a single classifier f that has an error of at most ε on all player distributions D_1, . . . , D_k. Our results provide upper and lower bounds on the sample
complexity overhead required for learning in both settings.
1.1 Overview of Results
In Section 3, we provide algorithms for personalized and centralized collaborative learning that obtain
exponential improvements over the sample complexity of the naïve approach. In Theorem 3.1, we introduce an algorithm for the personalized setting that has O(ln(k)) overhead in sample complexity. For the centralized setting, in Theorem 3.2, we develop an algorithm that has O(ln²(k)) overhead in
sample complexity. At a high level, the latter algorithm first learns a series of functions on adaptively
chosen mixtures of player distributions. These mixtures are chosen such that for any player a large
majority of the functions perform well. This allows us to combine all functions into one classifier
that performs well on every player distribution. Our algorithm is an improper learning algorithm, as
the combination of these functions may not belong to F.
In Section 4, we present lower bounds on the sample complexity of collaborative PAC learning for
the personalized and centralized variants. In particular, in Theorem 4.1 we show that any algorithm
that learns in the collaborative setting requires Ω(ln(k)) overhead in sample complexity. This shows
that our upper bound for the personalized setting, as stated in Theorem 3.1, is tight. Furthermore, in
Theorem 4.5, we show that obtaining uniform convergence across F over all k player distributions
requires Ω(k) overhead in sample complexity. Interestingly, our centralized algorithm (Theorem 3.2)
bypasses this lower bound by using arguments that do not depend on uniform convergence. Indeed,
this can be seen from the fact that it is an improper learning algorithm.
In Appendix D, we discuss the extension of our results to the non-realizable setting. Specifically, we
consider a setting where there is a "good" but not "perfect" target function f* ∈ F that has a small
error with respect to every player distribution, and prove that our upper bounds carry over.
1.2 Related Work
Related work in computational and statistical learning has examined some aspects of the general
problem of learning multiple related tasks simultaneously. Below we discuss papers on multi-task
learning [4, 3, 7, 5, 10, 13], domain adaptation [11, 12, 6], and distributed learning [2, 8, 15], which
are most closely related.
Multi-task learning considers the problem of learning multiple tasks in series or in parallel. In this
space, Baxter [4] studied the problem of model selection for learning multiple related tasks. In their
work, each learning task is itself randomly drawn from a distribution over related tasks, and the
learner's goal is to find a hypothesis space that is appropriate for learning all tasks. Ben-David and
Schuller [5] also studied the sample complexity of learning multiple related tasks. However, in their
work, similarity between two tasks is represented by the existence of "transfer" functions through which
underlying distributions are related.
Mansour et al. [11, 12] consider a multi-source domain adaptation problem, where the learner is given
k distributions and k corresponding predictors that have error at most ε on individual distributions. The goal of the learner is to combine these predictors to obtain error of kε on any unknown mixture of player distributions. Our work is incomparable to this line of work, as our goal is to learn classifiers, rather than combining existing ones, and our benchmark is to obtain error ε on each individual distribution. Indeed, in our setting one can learn a hypothesis that has error kε on any mixture of
players with no overhead in sample complexity.
Distributed learning [2, 8, 15] also considers the problem of learning from k different distributions
simultaneously. However, the main objective in this space is to learn with limited communication
between the players, rather than with low sample complexity.
2 Model
Let X be an instance space and Y = {0, 1} be the set of labels. A hypothesis is a function f : X → Y that maps any instance x ∈ X to a label y ∈ Y. We consider a hypothesis class F with VC dimension d. Given a distribution D over X × Y, the error of a hypothesis f is defined as err_D(f) = Pr_{(x,y)∼D}[f(x) ≠ y].

In the collaborative learning setting, we consider k players with distributions D_1, . . . , D_k over X × Y. We focus on the realizable setting, where all players' distributions are labeled according to a common target function f* ∈ F, i.e., err_{D_i}(f*) = 0 for all i ∈ [k] (but see Appendix D for an extension to the non-realizable setting). We represent an instance of the collaborative PAC learning setting with the 3-tuple (F, f*, {D_i}_{i∈[k]}).

Our goal is to learn a good classifier with respect to every player distribution. We call this (ε, δ)-learning in the collaborative PAC setting, and study two variants: the personalized setting and the centralized setting. In the personalized setting, our goal is to learn functions f_1, . . . , f_k, such that with probability 1 − δ, err_{D_i}(f_i) ≤ ε for all i ∈ [k]. In the centralized setting, we require all the output functions to be identical. Put another way, our goal is to return a single f, such that with probability 1 − δ, err_{D_i}(f) ≤ ε for all i ∈ [k]. In both settings, we allow our algorithm to be improper, that is, the learned functions need not belong to F.

We compare the sample complexity of our algorithms to their PAC counterparts in the realizable setting. In the traditional realizable PAC setting, m_{ε,δ} denotes the number of samples needed for (ε, δ)-learning F. That is, m_{ε,δ} is the total number of samples drawn from a realizable distribution D, such that, with probability 1 − δ, any classifier f ∈ F that is consistent with the sample set satisfies err_D(f) ≤ ε. We denote by O_F(·) the function that, for any set S of labeled samples, returns a function f ∈ F that is consistent with S if such a function exists (and outputs "none" otherwise). It is well-known that sampling a set S of size m_{ε,δ} = O((1/ε)(d ln(1/ε) + ln(1/δ))), and applying O_F(S), is sufficient for (ε, δ)-learning a hypothesis class F of VC dimension d [1]. We refer to the ratio of the sample complexity of an algorithm in the collaborative PAC setting to that of the (non-collaborative) realizable PAC setting as the overhead. For ease of exposition, we only consider the dependence of the overhead on parameters k, d, and ε.
3 Sample Complexity Upper Bounds
In this section, we prove upper bounds on the sample complexity of (ε, δ)-learning in the collaborative PAC setting. We begin by providing a simple algorithm with O(ln(k)) overhead (in terms of sample complexity, see Section 2) for the personalized setting. We then design and analyze an algorithm for the centralized setting with O(ln²(k)) overhead, following a discussion of additional challenges that arise in this setting.
3.1 Personalized Setting
The idea underlying the algorithm for the personalized setting is quite intuitive: If we were to learn a
classifier that is on average good for the players, then we have learned a classifier that is good for a
large fraction of the players. Therefore, a large fraction of the players can be simultaneously satisfied
by a single good global classifier. This process can be repeated until each player receives a good
classifier.
In more detail, let us consider an algorithm that pools together a sample set of total size m_{ε/4,δ} from the uniform mixture D = (1/k) Σ_{i∈[k]} D_i over individual player distributions, and finds f ∈ F that is consistent with this set. Clearly, with probability 1 − δ, f has a small error of ε/4 with respect to distribution D. However, we would like to understand how well f performs on each individual player's distribution.

Since err_D(f) ≤ ε/4 is also the average error of f on player distributions, with probability 1 − δ, f must have error of at most ε/2 on at least half of the players. Indeed, one can identify such players by taking additional Õ(1/ε) samples from each player and asking whether the empirical error of f on these sample sets is at most 3ε/4. Using a variant of the VC theorem, it is not hard to see that for any player i such that err_{D_i}(f) ≤ ε/2, the empirical error of f is at most 3ε/4, and no player with empirical error at most 3ε/4 has true error that is worse than ε. Once players with empirical error at most 3ε/4 are identified, one can output f_i = f for any such player, and repeat the procedure for the remaining players. After log(k) rounds, this process terminates with all players having received functions with error of at most ε on their respective distributions, with probability 1 − log(k)δ.
We formalize the above discussion via Algorithm 1 and Theorem 3.1. For completeness, a more
rigorous proof of the theorem is given in Appendix A.
Algorithm 1 PERSONALIZED LEARNING
N_1 ← [k]; δ′ ← δ/(2 log(k));
for r = 1, . . . , ⌈log(k)⌉ do
  D̄_r ← (1/|N_r|) Σ_{i∈N_r} D_i;
  Let S be a sample of size m_{ε/4,δ′} drawn from D̄_r, and f^(r) ← O_F(S);
  G_r ← TEST(f^(r), N_r, ε, δ′);
  N_{r+1} ← N_r \ G_r;
  for i ∈ G_r do f_i ← f^(r);
end
return f_1, . . . , f_k

TEST(f, N, ε, δ):
  for i ∈ N do take sample set T_i of size O((1/ε) ln(|N|/δ)) from D_i;
  return {i | err_{T_i}(f) ≤ 3ε/4}
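To make the round structure concrete, the following Python sketch implements the personalized learner against abstract oracles. It is a minimal illustration, not the paper's implementation: sample(i, n), which draws n labeled pairs from D_i, and erm(S), which plays the role of O_F, are assumed interfaces, and the sample-size arguments are left schematic rather than set to m_{ε/4,δ′}.

import math, random

def personalized_learning(k, eps, sample, erm, pool_size, test_size):
    # Sketch of Algorithm 1. sample(i, n) draws n pairs (x, y) from D_i;
    # erm(S) returns a hypothesis consistent with S (the oracle O_F).
    active = list(range(k))
    learned = [None] * k
    for _ in range(max(1, math.ceil(math.log2(k)))):
        if not active:
            break
        # Draw pool_size examples from the uniform mixture over active players.
        pooled = [ex for _ in range(pool_size)
                  for ex in sample(random.choice(active), 1)]
        f = erm(pooled)
        # TEST: a player keeps f if its empirical error is at most 3*eps/4.
        still_active = []
        for i in active:
            holdout = sample(i, test_size)
            err = sum(f(x) != y for x, y in holdout) / len(holdout)
            if err <= 0.75 * eps:
                learned[i] = f
            else:
                still_active.append(i)
        active = still_active
    return learned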
Theorem 3.1. For any ε, δ > 0, and hypothesis class F of VC dimension d, Algorithm 1 (ε, δ)-learns F in the personalized collaborative PAC setting using m samples, where
m = O( (ln(k)/ε) ( (d + k) ln(1/ε) + k ln(k/δ) ) ).
Note that Algorithm 1 has O(ln(k)) overhead when k = O(d).
3.2 Centralized Setting
We next present a learning algorithm with O(ln²(k)) overhead in the centralized setting. Recall that our goal is to learn a single function f that has an error of ε on every player distribution, as opposed to the personalized setting, where players can receive different functions.
A natural first attempt at learning in the centralized setting is to combine the classifiers f_1, . . . , f_k that we learned in the personalized setting (Algorithm 1), say, through a weighted majority vote. One challenge with this approach is that, in general, it is possible that many of the functions f_j perform poorly on the distribution of a different player i. The reason is that when Algorithm 1 finds a suitable f^(r) for players in G_r, it completely removes them from consideration for future rounds; subsequent functions may perform poorly with respect to the distributions associated with those players. Therefore, this approach may lead to a global classifier with large error on some player distributions.
To overcome this problem, we instead design an algorithm that continues to take additional samples from players for whom we have already found suitable classifiers. The key idea behind the centralized learning algorithm is to group the players at every round based on how many functions learned so far have large error rates on those players' distributions, and to learn from data sampled from all the groups simultaneously. This ensures that the function learned in each round performs well on a large fraction of the players in each group, thereby reducing the likelihood that in later stages of this process a player appears in a group for which a large fraction of the functions perform poorly.
In more detail, our algorithm learns t = Θ(ln(k)) classifiers f^(1), f^(2), . . . , f^(t), such that for any player i ∈ [k], at least 0.6t functions among them achieve an error below ε′ = ε/6 on D_i. The algorithm then returns the classifier maj({f^(r)}_{r=1}^t), where, for a set of hypotheses F, maj(F) denotes the classifier that, given x ∈ X, returns the label that the majority of hypotheses in F assign to x. Note that any instance that is mislabeled by this classifier must be mislabeled by at least 0.1t functions among the 0.6t good functions, i.e., 1/6 of the good functions. Hence, maj({f^(r)}_{r=1}^t) has an error of at most 6ε′ = ε on each distribution D_i.
Throughout the algorithm, we keep track of counters α_i^(r) for any round r ∈ [t] and player i ∈ [k], which, roughly speaking, record the number of classifiers among f^(1), f^(2), . . . , f^(r) that have an error of more than ε′ on distribution D_i. To learn f^(r+1), we first group distributions D_1, . . . , D_k based on the values of α_i^(r), draw about m_{ε′,δ} samples from the mixture of the distributions in each group, and return a function f^(r+1) that is consistent with all of the samples. Similarly to Section 3.1, one can show that f^(r+1) achieves O(ε′) error with respect to a large fraction of player distributions in each group. Consequently, the counters are increased, i.e., α_i^(r+1) > α_i^(r), only for a small fraction of players. Finally, we show that with high probability, α_i^(t) ≤ 0.4t for any player i ∈ [k], i.e., on each distribution D_i, at least 0.6t functions achieve error of at most ε′.
The algorithm is formally described in Algorithm 2. The next theorem states our sample complexity
upper bound for the centralized setting.
Algorithm 2 CENTRALIZED LEARNING
α_i^(0) ← 0 for each i ∈ [k];
t ← ⌈(5/2) log_{8/7}(k)⌉; ε′ ← ε/6;
N_0^(0) ← [k]; N_c^(0) ← ∅ for each c ∈ [t];
for r = 1, 2, . . . , t do
  for c = 0, 1, . . . , t − 1 do
    if N_c^(r−1) ≠ ∅ then
      Draw a sample set S_c^(r) of size m_{ε′/16, δ/(2t²)} from D̃_c^(r−1) = (1/|N_c^(r−1)|) Σ_{i∈N_c^(r−1)} D_i;
    else
      S_c^(r) ← ∅;
  end
  f^(r) ← O_F( ∪_{c=0}^{t−1} S_c^(r) );
  G_r ← TEST(f^(r), [k], ε′, δ/(2t));
  for i = 1, . . . , k do α_i^(r) ← α_i^(r−1) + I[i ∉ G_r];
  for c = 0, . . . , t do N_c^(r) ← {i ∈ [k] : α_i^(r) = c};
end
return maj({f^(r)}_{r=1}^t);
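The grouping-and-counting logic is easy to misread in pseudocode, so here is a hedged Python sketch of the same control flow. The oracles sample and erm are the same assumed interfaces as in the personalized sketch, and pool_size and test_size stand in for m_{ε′/16,δ/(2t²)} and the TEST sample size.

import math, random

def centralized_learning(k, eps, sample, erm, pool_size, test_size):
    t = math.ceil(2.5 * math.log(k, 8 / 7)) if k > 1 else 1
    eps_prime = eps / 6
    alpha = [0] * k              # alpha[i]: classifiers so far that fail on D_i
    classifiers = []
    for _ in range(t):
        pooled = []
        for c in set(alpha):     # each nonempty group N_c
            group = [i for i in range(k) if alpha[i] == c]
            for _ in range(pool_size):   # draw from the group's mixture
                pooled.extend(sample(random.choice(group), 1))
        f = erm(pooled)
        classifiers.append(f)
        for i in range(k):       # TEST at accuracy eps_prime
            holdout = sample(i, test_size)
            err = sum(f(x) != y for x, y in holdout) / len(holdout)
            if err > 0.75 * eps_prime:
                alpha[i] += 1    # f has large error on D_i
    def majority(x):             # maj({f^(r)}): labels are 0/1
        return int(2 * sum(f(x) for f in classifiers) > len(classifiers))
    return majority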
Theorem 3.2. For any ε, δ > 0, and hypothesis class F of VC dimension d, Algorithm 2 (ε, δ)-learns F in the centralized collaborative PAC setting using m samples, where
m = O( (ln²(k)/ε) ( (d + k) ln(1/ε) + k ln(1/δ) ) ).
In particular, Algorithm 2 has O(ln²(k)) overhead when k = O(d).
Turning to the theorem's proof, note that in Algorithm 2, N_c^(r−1) represents the set of players for whom c out of the r − 1 functions learned so far have a large error, and D̃_c^(r−1) represents the mixture of the distributions of players in N_c^(r−1). Moreover, G_r is the set of players for whom f^(r) has a small error. The following lemma, whose proof appears in Appendix B.1, shows that with high probability each function f^(r) has a small error on D̃_c^(r−1) for all c. Here and in the following, t stands for ⌈(5/2) log_{8/7}(k)⌉ as in Algorithm 2.
Lemma 3.3. With probability 1 − δ, the following two properties hold for all r ∈ [t]:
1. For any c ∈ {0, . . . , t − 1} such that N_c^(r−1) is non-empty, err_{D̃_c^(r−1)}(f^(r)) ≤ ε′/16.
2. For any i ∈ G_r, err_{D_i}(f^(r)) ≤ ε′, and for any i ∉ G_r, err_{D_i}(f^(r)) > ε′/2.
The next lemma gives an upper bound on |N_c^(r)|, the number of players for whom c out of the r learned functions have a large error.
Lemma 3.4. With probability 1 − δ, for any r, c ∈ {0, . . . , t}, we have |N_c^(r)| ≤ (r choose c) · k/8^c.
Proof. Let n_{r,c} = |N_c^(r)| = |{i ∈ [k] : α_i^(r) = c}| be the number of players for whom c functions in f^(1), . . . , f^(r) do not have a small error. We note that n_{0,0} = k and n_{0,c} = 0 for c ∈ {1, . . . , t}. The next technical claim, whose proof appears in Appendix B.2, asserts that to prove this lemma, it is sufficient to show that for any r ∈ {1, . . . , t} and c ∈ {0, . . . , t}, n_{r,c} ≤ n_{r−1,c} + (1/8) n_{r−1,c−1}. Here we assume that n_{r−1,−1} = 0.
Claim 3.5. Suppose that n_{0,0} = k, n_{0,c} = 0 for c ∈ {1, . . . , t}, and n_{r,c} ≤ n_{r−1,c} + (1/8) n_{r−1,c−1} holds for any r ∈ {1, . . . , t} and c ∈ {0, . . . , t}. Then for any r, c ∈ {0, . . . , t}, n_{r,c} ≤ (r choose c) · k/8^c.
By definition of α_i^(r), N_c^(r), and n_{r,c}, we have
n_{r,c} = |{i ∈ [k] : α_i^(r) = c}| ≤ |{i ∈ [k] : α_i^(r−1) = c}| + |{i ∈ [k] : α_i^(r−1) = c − 1 ∧ i ∉ G_r}| = n_{r−1,c} + |N_{c−1}^(r−1) \ G_r|.
It remains to show that |N_{c−1}^(r−1) \ G_r| ≤ (1/8) n_{r−1,c−1}. Recall that D̃_{c−1}^(r−1) is the mixture of all distributions in N_{c−1}^(r−1). By Lemma 3.3, with probability 1 − δ, err_{D̃_{c−1}^(r−1)}(f^(r)) < ε′/16. Put another way, Σ_{i∈N_{c−1}^(r−1)} err_{D_i}(f^(r)) < (ε′/16) · |N_{c−1}^(r−1)|. Thus, at most (1/8)|N_{c−1}^(r−1)| players i ∈ N_{c−1}^(r−1) can have err_{D_i}(f^(r)) > ε′/2. Moreover, by Lemma 3.3, for any i ∉ G_r, we have that err_{D_i}(f^(r)) > ε′/2. Therefore,
|N_{c−1}^(r−1) \ G_r| ≤ |{i ∈ N_{c−1}^(r−1) : err_{D_i}(f^(r)) > ε′/2}| ≤ (1/8)|N_{c−1}^(r−1)| = (1/8) n_{r−1,c−1}.
This completes the proof.
We now prove Theorem 3.2 using Lemma 3.4.
Proof of Theorem 3.2. We first show that, with high probability, for any i ∈ [k], at most 0.4t functions among f^(1), . . . , f^(t) have error greater than ε′, i.e., α_i^(t) < 0.4t for all i ∈ [k]. Note that by our choice of t = ⌈(5/2) log_{8/7}(k)⌉, we have (8/7)^{0.4t} ≥ k. By Lemma 3.4 and an upper bound on binomial coefficients, with probability 1 − δ, for any integer c ∈ [0.4t, t],
|N_c^(t)| ≤ (t choose c) · k/8^c < (et/c)^c · k/8^c < k/(8/7)^c ≤ 1,
which implies that N_c^(t) = ∅. Therefore, with probability 1 − δ, α_i^(t) < 0.4t for all i ∈ [k].
Next, we prove that f = maj({f^(r)}_{r=1}^t) has error at most ε on every player distribution. Consider distribution D_i of player i. By definition, t − α_i^(t) functions have error at most ε′ on D_i. We refer to these functions as "good" functions. Note that for any instance x that is mislabeled by f, at least 0.5t − α_i^(t) good functions must make a wrong prediction. Therefore, (t − α_i^(t)) ε′ ≥ (0.5t − α_i^(t)) · err_{D_i}(f). Moreover, with probability 1 − δ, α_i^(t) < 0.4t for all i ∈ [k]. Hence,
err_{D_i}(f) ≤ (t − α_i^(t)) ε′ / (0.5t − α_i^(t)) ≤ 0.6t ε′ / 0.1t = 6ε′ = ε,
with probability 1 − δ. This proves that Algorithm 2 (ε, δ)-learns F in the centralized collaborative PAC setting.
Finally, we bound the sample complexity of Algorithm 2. Recall that t = Θ(ln(k)) and ε′ = ε/6. At each iteration of Algorithm 2, we draw a total of t · m_{ε′/16, δ/(4t²)} samples from t mixtures. Therefore, over t time steps, we draw a total of
t² · m_{ε′/16, δ/(4t²)} = O( (ln²(k)/ε) ( d ln(1/ε) + ln(1/δ) + ln ln(k) ) )
samples for learning f^(1), . . . , f^(t). Moreover, the total number of samples requested by subroutine TEST(f^(r), [k], ε′, δ/(4t)) for r = 1, . . . , t is
O( (tk/ε) ln(tk/δ) ) = O( (ln(k)/ε) ( k ln(1/ε) + k ln(1/δ) ) + (ln²(k)/ε) k ).
We conclude that the total sample complexity is
O( (ln²(k)/ε) ( (d + k) ln(1/ε) + k ln(1/δ) ) ).
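As a quick numeric spot-check of the counting argument above, the Python snippet below evaluates Lemma 3.4's bound (t choose c)·k/8^c at c = ⌈0.4t⌉ for a few values of k; in each case the bound falls below 1, as the proof requires.

import math

for k in [2, 10, 100, 10_000]:
    t = math.ceil(2.5 * math.log(k, 8 / 7))
    c = math.ceil(0.4 * t)
    bound = math.comb(t, c) * k / 8 ** c   # Lemma 3.4's bound on |N_c^(t)|
    print(k, t, c, bound < 1)              # True: N_c^(t) must be empty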
We remark that Algorithm 2 is inspired by the classic boosting scheme. Indeed, an algorithm that is directly adapted from boosting attains a similar performance guarantee as in Theorem 3.2. The algorithm assigns a uniform weight to each player, and learns a classifier with O(ε) error on the mixture distribution. Then, depending on whether the function achieves an O(ε) error on each distribution, the algorithm updates the players' weights, and learns the next classifier from the weighted mixture of all distributions. An analysis similar to that of AdaBoost [9] shows that the majority vote of all the classifiers learned over Θ(ln(k)) iterations of the above procedure achieves a small error on every distribution. Similar to Algorithm 2, this algorithm achieves an O(ln²(k)) overhead for the centralized setting.
4 Sample Complexity Lower Bounds
In this section, we present lower bounds on the sample complexity of collaborative PAC learning. In Section 4.1, we show that any learning algorithm for the collaborative PAC setting incurs Ω(log(k)) overhead in terms of sample complexity. In Section 4.2, we consider the sample complexity required for obtaining uniform convergence across F in the collaborative PAC setting. We show that Ω(k) overhead is necessary to obtain such results.
4.1 Tight Lower Bound for the Personalized Setting
We now turn to establishing the Ω(log(k)) lower bound mentioned above. This lower bound implies the tightness of the O(log(k)) overhead upper bound obtained by Theorem 3.1 in the personalized setting. Moreover, the O(log²(k)) overhead obtained by Theorem 3.2 in the centralized setting is nearly tight, up to a log(k) multiplicative factor. Formally, we prove the following theorem.
Theorem 4.1. For any k ∈ N, ε, δ ∈ (0, 0.1), and (ε, δ)-learning algorithm A in the collaborative PAC setting, there exist an instance with k players, and a hypothesis class of VC-dimension k, on which A requires at least 3k ln[9k/(10δ)]/(20ε) samples in expectation.
Hard instance distribution. We show that for any k ∈ N and ε, δ ∈ (0, 0.1), there is a distribution D_{k,ε} of "hard" instances, each with k players and a hypothesis class with VC-dimension k, such that any (ε, δ)-learning algorithm A requires Ω(k log(k)/ε) samples in expectation on a random instance drawn from the distribution, even in the personalized setting. This directly implies Theorem 4.1, since A must take Ω(k log(k)/ε) samples on some instance in the support of D_{k,ε}. We define D_{k,ε} as follows (a sampling sketch appears after the list):
• Instance space: X_k = {1, 2, . . . , k, ⊥}.
• Hypothesis class: F_k is the collection of all binary functions on X_k that map ⊥ to 0.
• Target function: f* is chosen from F_k uniformly at random.
• Players' distributions: The distribution D_i of player i is either a degenerate distribution that assigns probability 1 to ⊥, or a Bernoulli distribution on {i, ⊥} with D_i(i) = 2ε and D_i(⊥) = 1 − 2ε. D_i is chosen from these two distributions independently and uniformly at random.
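The sketch below, in Python, draws one instance from D_{k,ε} and exposes a per-player sampling oracle; the names (BOT for ⊥, draw_hard_instance) are ours, introduced only for illustration.

import random

def draw_hard_instance(k, eps, rng=random.Random(0)):
    BOT = "bot"                                   # stands for the point ⊥
    f_star = {i: rng.randint(0, 1) for i in range(1, k + 1)}  # f*(⊥) = 0
    degenerate = [rng.random() < 0.5 for _ in range(k)]       # choice of D_i

    def sample(i):
        # One labeled draw (x, f*(x)) from player i's distribution D_i.
        if degenerate[i - 1] or rng.random() >= 2 * eps:
            return (BOT, 0)
        return (i, f_star[i])

    return f_star, sample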
Note that the VC-dimension of F_k is k. Moreover, on any instance in the support of D_{k,ε}, learning in the personalized setting is equivalent to learning in the centralized setting. This is due to the fact that given functions f_1, f_2, . . . , f_k for the personalized setting, where f_i is the function assigned to player i, we can combine these functions into a single function f ∈ F_k for the centralized setting by defining f(⊥) = 0 and f(i) = f_i(i) for all i ∈ [k]. Then, err_{D_i}(f) ≤ err_{D_i}(f_i) for all i ∈ [k].¹ Therefore, without loss of generality we focus below on the centralized setting.
Lower bound for k = 1. As a building block in our proof of Theorem 4.1, we establish a lower bound for the special case of k = 1. For brevity, let D_ε denote the instance distribution D_{1,ε}. We say that A is an (ε, δ)-learning algorithm for the instance distribution D_ε if and only if on any instance in the support of D_ε, with probability at least 1 − δ, A outputs a function f with error below ε. The following lemma, proved in Appendix C, states that any (ε, δ)-learning algorithm for D_ε takes Ω(log(1/δ)/ε) samples on a random instance drawn from D_ε.²
Lemma 4.2. For any ε, δ ∈ (0, 0.1) and (ε, δ)-learning algorithm A for D_ε, A takes at least ln(1/δ)/(6ε) samples in expectation on a random instance drawn from D_ε. Here the expectation is taken over both the randomness in the samples and the randomness in drawing the instance from D_ε.
Now we prove Theorem 4.1 by Lemma 4.2 and a reduction from a random instance sampled from D_ε to instances sampled from D_{k,ε}. Intuitively, a random instance drawn from D_{k,ε} is equivalent to k independent instances from D_ε. We show that any learning algorithm A that simultaneously solves k tasks (i.e., an instance from D_{k,ε}) with probability 1 − δ can be transformed into an algorithm A′ that solves a single task (i.e., an instance from D_ε) with probability 1 − O(δ/k). Moreover, the expected sample complexity of A′ is only an O(1/k) fraction of the complexity of A. This transformation, together with Lemma 4.2, gives a lower bound on the sample complexity of A.
Proof Sketch of Theorem 4.1. We construct an algorithm A′ for the instance distribution D_ε from an algorithm A that (ε, δ)-learns in the centralized setting. Recall that on an instance drawn from D_ε, A′ has access to a distribution D, i.e., the single player's distribution.
• A′ generates an instance (F_k, f*, {D_i}_{i∈[k]}) from the distribution D_{k,ε} (specifically, A′ knows the target function f* and the distributions), and then chooses l ∈ [k] uniformly at random.
• A′ simulates A on instance (F_k, f*, {D_i}_{i∈[k]}), with D_l replaced by the distribution D. Specifically, every time A draws a sample from D_j for some j ≠ l, A′ samples D_j and forwards the sample to A. When A asks for a sample from D_l, A′ samples the distribution D instead and replies to A accordingly, i.e., A′ returns l, together with the label, if the sample is 1 (recall that X_1 = {1, ⊥}), and returns ⊥ otherwise.
• Finally, when A terminates and returns a function f on X_k, A′ checks whether err_{D_j}(f) < ε for every j ≠ l. If so, A′ returns the function f′ defined as f′(1) = f(l) and f′(⊥) = f(⊥). Otherwise, A′ repeats the simulation process on a new instance drawn from D_{k,ε}.
Let m_i be the expected number of samples drawn from the i-th distribution when A runs on an instance drawn from D_{k,ε}. We have the following two claims, whose proofs are relegated to Appendix C.
Claim 4.3. A′ is an (ε, 10δ/(9k))-learning algorithm for D_ε.
Claim 4.4. A′ takes at most (10/(9k)) Σ_{i=1}^k m_i samples in expectation.
Applying Lemma 4.2 to A′ gives Σ_{i=1}^k m_i ≥ 3k ln[9k/(10δ)]/(20ε), which proves Theorem 4.1.
4.2 Lower Bound for Uniform Convergence
We next examine the sample complexity required for obtaining uniform convergence across the hypothesis class F in the centralized collaborative PAC setting, and establish an overhead lower bound of Ω(k). Interestingly, our centralized learning algorithm (Algorithm 2) achieves O(log²(k)) overhead; it circumvents the lower bound by not relying on uniform convergence.
¹ In fact, when f_i ∈ F_k, err_{D_i}(f) = err_{D_i}(f_i) for all i ∈ [k].
² Here we only assume that A is correct for instances in the support of D_ε, rather than being correct on every instance.
To be more formal, we first need to define uniform convergence in the collaborative PAC learning setting. We say that a hypothesis class F has the uniform convergence property with sample size m_{ε,δ}^{(k)} if for any k distributions D_1, . . . , D_k, there exist integers m_1, . . . , m_k that sum up to m_{ε,δ}^{(k)}, such that when m_i samples are drawn from D_i for each i ∈ [k], with probability 1 − δ, any function in F that is consistent with all the m_{ε,δ}^{(k)} samples achieves error at most ε on every distribution D_i.
Note that the foregoing definition is a relatively weak adaptation of uniform convergence to the collaborative setting, as the integers m_i are allowed to depend on the distributions D_i. But this observation only strengthens our lower bound, which holds despite the weak requirement.
Theorem 4.5. For any k, d ∈ N and (ε, δ) ∈ (0, 0.1), there exists a hypothesis class F of VC-dimension d, such that m_{ε,δ}^{(k)} ≥ dk(1 − δ)/(4ε).
Proof Sketch of Theorem 4.5. Fix k, d ∈ N and ε, δ ∈ (0, 0.1). We define instance (F, f*, {D_i}_{i=1}^k) as follows. The instance space is X = ([k] × [d]) ∪ {⊥}, and the hypothesis class F contains all binary functions on X that map ⊥ to 0 and take value 1 on at most d points. The target function f* maps every element in X to 0. Finally, the distribution of each player i ∈ [k] is given by D_i((i, j)) = 2ε/d for any j ∈ [d] and D_i(⊥) = 1 − 2ε.
Note that if a sample set contains strictly less than d/2 elements in {(i*, 1), (i*, 2), . . . , (i*, d)} for some i*, there is a consistent function in F with error strictly greater than ε on D_{i*}, namely, the function that maps (i, j) to 1 if and only if i = i* and (i*, j) is not in the sample set.
Therefore, to achieve uniform convergence, at least d/2 elements from X \ {⊥} must be drawn from each distribution. Since the probability that each sample is different from ⊥ is 2ε, drawing d/2 such samples from k distributions requires Ω(dk/ε) samples.
A complete proof of Theorem 4.5 appears in Appendix C.
Acknowledgments
We thank the anonymous reviewers for their helpful remarks and for suggesting an alternative boosting-based approach for the centralized setting. This work was partially supported by the NSF grants CCF-1525971, CCF-1536967, CCF-1331175, IIS-1350598, IIS-1714140, CCF-1525932, and CCF-1733556, Office of Naval Research grants N00014-16-1-3075 and N00014-17-1-2428, a Sloan Research Fellowship, and a Microsoft Research Ph.D. fellowship. This work was done while Avrim Blum was working at Carnegie Mellon University.
References
[1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[2] Maria Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. In Proceedings of the 25th Conference on Computational Learning Theory (COLT), pages 26.1–26.22, 2012.
[3] Jonathan Baxter. A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28(1):7–39, 1997.
[4] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
[5] Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. In Proceedings of the 16th Conference on Computational Learning Theory (COLT), pages 567–580, 2003.
[6] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.
[7] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[8] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 713–720, 2011.
[9] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proceedings of the 2nd European Conference on Computational Learning Theory (EuroCOLT), pages 23–37, 1995.
[10] Abhishek Kumar and Hal Daumé III. Learning task grouping and overlap in multi-task learning. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1103–1110, 2012.
[11] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Proceedings of the 22nd Conference on Computational Learning Theory (COLT), pages 19–30, 2009.
[12] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation with multiple sources. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS), pages 1041–1048, 2009.
[13] Massimiliano Pontil and Andreas Maurer. Excess risk bounds for multitask learning with trace norm regularization. In Proceedings of the 26th Conference on Computational Learning Theory (COLT), pages 55–76, 2013.
[14] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[15] Jialei Wang, Mladen Kolar, and Nathan Srebro. Distributed multi-task learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 751–760, 2016.
Fast Black-box Variational Inference
through Stochastic Trust-Region Optimization
Jeffrey Regier
[email protected]
Michael I. Jordan
[email protected]
Jon McAuliffe
[email protected]
Abstract
We introduce TrustVI, a fast second-order algorithm for black-box variational
inference based on trust-region optimization and the "reparameterization trick." At
each iteration, TrustVI proposes and assesses a step based on minibatches of draws
from the variational distribution. The algorithm provably converges to a stationary
point. We implemented TrustVI in the Stan framework and compared it to two
alternatives: Automatic Differentiation Variational Inference (ADVI) and Hessianfree Stochastic Gradient Variational Inference (HFSGVI). The former is based
on stochastic first-order optimization. The latter uses second-order information,
but lacks convergence guarantees. TrustVI typically converged at least one order
of magnitude faster than ADVI, demonstrating the value of stochastic second-order
information. TrustVI often found substantially better variational distributions than
HFSGVI, demonstrating that our convergence theory can matter in practice.
1 Introduction
The "reparameterization trick" [1, 2, 3] has led to a resurgence of interest in variational inference (VI),
making it applicable to essentially any differentiable model. This new approach, however, requires
stochastic optimization rather than fast deterministic optimization algorithms like closed-form
coordinate ascent. Some fast stochastic optimization algorithms exist, but variational objectives have
properties that make them unsuitable: they are typically nonconvex, and the relevant expectations
cannot usually be replaced by finite sums. Thus, to date, practitioners have used SGD and its variants
almost exclusively. Automatic Differentiation Variational Inference (ADVI) [4] has been especially
successful at making variational inference based on first-order stochastic optimization accessible.
Stochastic first-order optimization, however, is slow in theory (sublinear convergence) and in practice
(thousands of iterations), negating a key benefit of VI.
This article presents TrustVI, a fast algorithm for variational inference based on second-order
trust-region optimization and the reparameterization trick. TrustVI routinely converges in tens
of iterations for models that take thousands of ADVI iterations. TrustVI's iterations can be more
expensive, but on a large collection of Bayesian models, TrustVI typically reduced total computation
by an order of magnitude. Usually TrustVI and ADVI find the same objective value, but when they
differ, TrustVI is typically better.
TrustVI adapts to the stochasticity of the optimization problem, raising the sampling rate for assessing
proposed steps based on a Hoeffding bound. It provably converges to a stationary point. TrustVI
generalizes the Newton trust-region method [5], which converges quadratically and has performed
well at optimizing analytic variational objectives even at an extreme scale [6]. With large enough
minibatches, TrustVI iterations are nearly as productive as those of a deterministic trust region
method. Fortunately, large minibatches make effective use of single-instruction multiple-data (SIMD)
parallelism on modern CPUs and GPUs.
TrustVI uses either explicitly formed approximations of Hessians or approximate Hessian-vector
products. Explicitly formed Hessians can be fast for low-dimensional problems or problems with
sparse Hessians, particularly when expensive computations (e.g., exponentiation) already need to be
performed to evaluate a gradient. But Hessian-vector products are often more convenient. They can
be computed efficiently through forward-mode automatic differentiation, reusing the implementation
for computing gradients [7, 8]. This is the approach we take in our experiments.
Fan et al. [9] also note the limitations of first-order stochastic optimization for variational inference:
the learning rate is difficult to set, and convergence is especially slow for models with substantial
curvature. Their approach is to apply Newton's method or L-BFGS to problems that are both stochastic and nonconvex. All stationary points (minima, maxima, and saddle points) act as attractors for Newton steps, however, so while Newton's method may converge quickly, it may
also converge poorly. Trust region methods, on the other hand, are not only unharmed by negative
curvature, they exploit it: descent directions that become even steeper are among the most productive.
In Section 5, we empirically compare TrustVI to Hessian-free Stochastic Gradient Variational Inference
(HFSGVI) to assess the practical importance of our convergence theory.
TrustVI builds on work from the derivative-free optimization community [10, 11, 12]. The STORM
framework [12] is general enough to apply to a derivative-free setting, as well as settings where
higher-order stochastic information is available. STORM, however, requires that a quadratic model of
the objective function can always be constructed such that, with non-trivial probability, the quadratic
model's absolute error is uniformly bounded throughout the trust region. That requirement can
be satisfied for the kind of low-dimensional problems one can optimize without derivatives, where
the objective may be sampled throughout the trust region at a reasonable density, but not for most
variational objective functions.
2 Background
Variational inference chooses an approximation to the posterior distribution from a class of candidate distributions through numerical optimization [13]. The candidate approximating distributions q_ω are parameterized by a real-valued vector ω. The variational objective function L, also known as the evidence lower bound (ELBO), is an expectation with respect to latent variables z that follow an approximating distribution q_ω:
L(ω) ≜ E_{q_ω}{ log p(x, z) − log q_ω(z) }.   (1)
Here x, the data, is fixed. If this expectation has an analytic form, L may be maximized by deterministic optimization methods, such as coordinate ascent and Newton's method. Realistic Bayesian models, however, not selected primarily for computational convenience, seldom yield variational objective functions with analytic forms.
Stochastic optimization offers an alternative. For many common classes of approximating distributions, there exist a base distribution p_0 and a function g_ω such that, for e ∼ p_0 and z ∼ q_ω, g_ω(e) =_d z. In words: the random variable z, whose distribution depends on ω, is a deterministic function of a random variable e whose distribution does not depend on ω. This alternative expression of the variational distribution is known as the "reparameterization trick" [1, 2, 3, 14]. At each iteration of an optimization procedure, ω is updated based on an unbiased Monte Carlo approximation to the objective function:
L̂(ω; e_1, . . . , e_N) ≜ (1/N) Σ_{i=1}^N { log p(x, g_ω(e_i)) − log q_ω(g_ω(e_i)) }   (2)
for e_1, . . . , e_N sampled from the base distribution.
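For concreteness, here is a minimal NumPy sketch of the estimator in Equation 2 for a mean-field Gaussian variational family, with z = g_ω(e) = μ + σ ⊙ e; log_p is an assumed callback that evaluates log p(x, z) with the data x held fixed, and the parameter packing is ours.

import numpy as np

def elbo_hat(omega, log_p, num_samples, rng=np.random.default_rng(0)):
    # omega packs (mu, log_sigma) for q_omega = N(mu, diag(exp(2*log_sigma))).
    d = omega.size // 2
    mu, log_sigma = omega[:d], omega[d:]
    e = rng.standard_normal((num_samples, d))   # e ~ p_0 = N(0, I)
    z = mu + np.exp(log_sigma) * e              # the reparameterization trick
    log_q = (-0.5 * np.sum(((z - mu) / np.exp(log_sigma)) ** 2, axis=1)
             - np.sum(log_sigma) - 0.5 * d * np.log(2 * np.pi))
    return np.mean(np.array([log_p(zi) for zi in z]) - log_q)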
3 TrustVI
TrustVI performs stochastic optimization of the ELBO L to find a distribution q_ω that approximates the posterior. For TrustVI to converge, the ELBO only needs to satisfy Condition 1. (Subsequent conditions apply to the algorithm specification, not the optimization problem.)
Condition 1. L : R^D → R is a twice-differentiable function of ω that is bounded above. Its gradient has Lipschitz constant L.
Condition 1 is compatible with all models whose conditional distributions are in the exponential family. The ELBO for a model with categorical random variables, for example, is twice differentiable in its parameters when using a mean-field categorical variational distribution.
Algorithm 1 TrustVI
Require: Initial iterate ω_0 ∈ R^D; initial trust region radius δ_0 ∈ (0, δ_max]; and settings for the parameters listed in Table 1.
for k = 0, 1, 2, . . . do
  Draw stochastic gradient g_k satisfying Condition 2.
  Select symmetric matrix H_k satisfying Condition 3.
  Solve for s_k ≜ arg max { g_k^T s + (1/2) s^T H_k s : ||s|| ≤ δ_k }.
  Compute m'_k ≜ g_k^T s_k + (1/2) s_k^T H_k s_k.
  Select N_k satisfying Inequality 11 and Inequality 13.
  Draw ℓ'_{k1}, . . . , ℓ'_{kN_k} satisfying Condition 4.
  Compute ℓ'_k ≜ (1/N_k) Σ_{i=1}^{N_k} ℓ'_{ki}.
  if ℓ'_k ≥ η m'_k ≥ γ δ_k² then
    ω_{k+1} ← ω_k + s_k
    δ_{k+1} ← min(α δ_k, δ_max)
  else
    ω_{k+1} ← ω_k
    δ_{k+1} ← δ_k / α
  end if
end for
Table 1: User-selected parameters for TrustVI
name | brief description | allowable range
η | model fitness threshold | (0, 1/2]
α | trust region expansion factor | (1, ∞)
γ | trust region radius constraint | (0, ∞)
λ | tradeoff between trust region radius and objective value | (α²γ/(1 − η), ∞)
θ3 | tradeoff between both sampling rates | (0, 1 − η)
θ2 | accuracy of "good" stochastic gradients' norms | (0, 1)
θ1 | accuracy of "good" stochastic gradients' directions | (0, 1 − η − θ3)
τ0 | probability of "good" stochastic gradients | (1/2, 1)
τ1 | probability of accepting a "good" step | (1/(2τ0), 1)
κ_H | maximum norm of the quadratic models' Hessians | [0, ∞)
δ̄ | maximum trust region radius for enforcing some conditions | (0, δ_max]
δ_max | maximum trust region radius | (0, ∞)
The domain of L is taken to be all of R^D. If instead the domain is a proper subset of a real coordinate space, the ELBO can often be reparameterized so that its domain is R^D [4].
TrustVI iterations follow the form of common deterministic trust region methods: 1) construct a
quadratic model of the objective function restricted to the current trust region; 2) find an approximate
optimizer of the model function: the proposed step; 3) assess whether the proposed step leads to
an improvement in the objective; and 4) update the iterate and the trust region radius based on the
assessment. After introducing notation in Section 3.1, we describe proposing a step in Section 3.2
and assessing a proposed step in Section 3.3. TrustVI is summarized in Algorithm 1.
3.1 Notation
TrustVI's iteration number is denoted by k. During iteration k, until variables are updated at its end, ω_k is the iterate, δ_k is the trust region radius, and L(ω_k) is the objective-function value. As shorthand, let L_k ≜ L(ω_k).
During iteration k, a quadratic model m_k is formed based on a stochastic gradient g_k of L(ω_k), as well as a local Hessian approximation H_k. The maximizer of this model on the trust region, s_k, we call the proposed step. The maximum, denoted m'_k ≜ m_k(s_k), we refer to as the model improvement.
We use the "prime" symbol to denote changes relating to a proposed step s_k that is not necessarily accepted; e.g., ΔL'_k = L(ω_k + s_k) − L_k. We use the symbol Δ to denote change across iterations; e.g., ΔL_k = L_{k+1} − L_k. If a proposed step is accepted, then, for example, ΔL_k = ΔL'_k and Δδ_k = Δδ'_k.
Each iteration k has two sources of randomness: m_k and ℓ'_k, an unbiased estimate of ΔL'_k that determines whether to accept proposed step s_k. ℓ'_k is based on an iid random sample of size N_k (Section 3.3).
For the random sequence m_1, ℓ'_1, m_2, ℓ'_2, . . ., it is often useful to condition on the earlier variables when reasoning about the next. Let M_k refer to the σ-algebra generated by m_1, . . . , m_{k−1} and ℓ'_1, . . . , ℓ'_{k−1}. When we condition on M_k, we hold constant all the outcomes that precede iteration k. Let M_k^+ refer to the σ-algebra generated by m_1, . . . , m_k and ℓ'_1, . . . , ℓ'_{k−1}. When we condition on M_k^+, we hold constant all the outcomes that precede drawing the sample that determines whether to accept the kth proposed step.
Table 1 lists the user-selected parameters that govern the behavior of the algorithm. TrustVI converges to a stationary point for any selection of parameters in the allowable range (column 3). As shorthand, we refer to a particular trust region radius, derived from the user-selected parameters, as
δ̄_k ≜ min( √(η m'_k / λ), θ2 θ3 ||∇L_k|| / (θ2 L + θ2 η κ_H + 8 κ_H), δ̄ ).   (3)
3.2 Proposing a step
At each iteration, TrustVI proposes the step s_k that maximizes the local quadratic approximation
m_k(s) = L_k + g_k^T s + (1/2) s^T H_k s : ||s|| ≤ δ_k   (4)
to the function L restricted to the trust region.
We set g_k to the gradient of L̂ at ω_k, where L̂ is evaluated using a freshly drawn sample e_1, . . . , e_N. From Equation 2 we see that g_k is a stochastic gradient constructed from a minibatch of size N. We must choose N large enough to satisfy the following condition:
Condition 2. If δ_k ≤ δ̄_k, then, with probability τ0, given M_k,
g_k^T ∇L_k ≥ (θ1 + θ3) ||∇L_k|| ||g_k|| + η ||g_k||²   (5)
and
||g_k|| ≥ θ2 ||∇L_k||.   (6)
Condition 2 is the only restriction on the stochastic gradients: they have to point in roughly the right direction most of the time, and they have to be of roughly the right magnitude when they do. By constructing the stochastic gradients from large enough minibatches of draws from the variational distribution, this condition can always be met.
In practice, we cannot observe ∇L, and we do not explicitly set θ1, θ2, and θ3. Fortunately, Condition 2 holds as long as our stochastic gradients remain large in relation to their variance. Because we base each stochastic gradient on at least one sizable minibatch, we always have many iid samples to inform us about the population of stochastic gradients. We use a jackknife estimator [15] to conservatively bound the standard deviation of the norm of the stochastic gradient. If the norm of a given stochastic gradient is small relative to its standard deviation, we double the next iteration's sampling rate. If it is large relative to its standard deviation, we halve it. Otherwise, we leave it unchanged.
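A sketch of that jackknife rule follows, assuming the stochastic gradient is the row-mean of an N × D matrix of per-sample gradients; the doubling and halving thresholds below are placeholders, since the paper does not state numeric cutoffs.

import numpy as np

def jackknife_se_of_grad_norm(per_sample_grads):
    # Leave-one-out standard error of the norm of the averaged gradient.
    n = per_sample_grads.shape[0]
    total = per_sample_grads.sum(axis=0)
    loo = np.array([np.linalg.norm((total - g) / (n - 1))
                    for g in per_sample_grads])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

def next_minibatch_size(n, grad_norm, se, low=2.0, high=10.0):
    if grad_norm < low * se:    # norm small relative to its spread: double
        return 2 * n
    if grad_norm > high * se:   # norm large relative to its spread: halve
        return n // 2
    return n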
The gradient observations may include randomness from sources other than sampling the variational
distribution too. In the "doubly stochastic" setting [3], for example, the data is also subsampled.
This setting is fully compatible with our algorithm, though the size of the subsample may need to
vary across iterations. To simplify our presentation, we henceforth only consider stochasticity from
sampling the variational distribution.
Condition 3 is the only restriction on the quadratic models' Hessians.
Condition 3. There exists finite κ_H satisfying, for the spectral norm,
||H_k|| ≤ κ_H a.s.   (7)
for all iterations k with δ_k ≤ δ̄_k.
For concreteness we bound the spectral norm of H_k, but a bound on any L_p norm suffices. The algorithm specification does not involve κ_H, but the convergence proof requires that κ_H be finite. This condition suffices to ensure that, when the trust region is small enough, the model's Hessian cannot interfere with finding a descent direction. With such mild conditions, we are free to use nearly arbitrary Hessians. Hessians may be formed like the stochastic gradients, by sampling from the variational distribution. The number of samples can be varied. The quadratic model's Hessian could even be set to the identity matrix if we prefer not to compute second-order information.
Low-dimensional models, and models with block diagonal Hessians, may be optimized explicitly by inverting −H_k + ν_k I, where ν_k is either zero for interior solutions, or just large enough that (−H_k + ν_k I)^{−1} g_k is on the boundary of the trust region [5]. Matrix inversion has cubic runtime, though, and even explicitly storing H_k is prohibitive for many variational objectives.
In our experiments, we instead maximize the model without explicitly storing the Hessian, through
Hessian-vector multiplication, assembling Krylov subspaces through both conjugate gradient
iterations and Lanczos iterations [16, 17]. We reuse our Hessian approximation for two consecutive
iterations if the iterate does not change (i.e., the proposed steps are rejected). A new stochastic
gradient g_k is still drawn for each of these iterations.
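A Hessian-vector product of this kind can be written in a few lines with forward-mode differentiation applied to the gradient; the JAX sketch below illustrates the construction on a stand-in objective (TrustVI itself is implemented in Stan, so this is purely illustrative).

import jax
import jax.numpy as jnp

def hvp(f, omega, v):
    # H(omega) @ v via forward-over-reverse; no Hessian is ever materialized.
    return jax.jvp(jax.grad(f), (omega,), (v,))[1]

f = lambda w: -jnp.sum(jnp.cos(w) * w ** 2)   # stand-in for a frozen-noise ELBO
print(hvp(f, jnp.arange(3.0), jnp.ones(3)))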
3.3 Assessing the proposed step
Deterministic trust region methods only accept steps that improve the objective by enough. In a stochastic setting, we must ensure that accepting "bad" steps is improbable while accepting "good" steps is likely.
To assess steps, TrustVI draws new samples from the variational distribution; we may not reuse the samples that g_k and H_k are based on. The new samples are used to estimate both L(ω_k) and L(ω_k + s_k). Using the same sample to estimate both quantities is analogous to a matched-pairs experiment; it greatly reduces the variance of the improvement estimator. Formally, for i = 1, . . . , N_k, let e_{ki} follow the base distribution and set
ℓ'_{ki} ≜ L̂(ω_k + s_k; e_{ki}) − L̂(ω_k; e_{ki}).   (8)
Let
ℓ'_k ≜ (1/N_k) Σ_{i=1}^{N_k} ℓ'_{ki}.   (9)
Then, ℓ'_k is an unbiased estimate of ΔL'_k, the quantity a deterministic trust region method would use to assess the proposed step.
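In code, the matched-pairs estimator looks as follows; elbo_hat_single(omega, e) is an assumed one-draw version of L̂ from Equation 2 with a mean-field Gaussian parameter packing, and the returned sample standard deviation doubles as a retrospective estimate of σ_k (Section 3.3.1).

import numpy as np

def improvement_estimate(elbo_hat_single, omega, s, n,
                         rng=np.random.default_rng(0)):
    diffs = []
    for _ in range(n):
        e = rng.standard_normal(omega.size // 2)   # one base draw, reused twice
        diffs.append(elbo_hat_single(omega + s, e) - elbo_hat_single(omega, e))
    diffs = np.asarray(diffs)
    return diffs.mean(), diffs.std(ddof=1)         # ell'_k and a sigma_k proxy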
3.3.1 Choosing the sample size
To pick the sample size N_k, we need additional control on the distribution of the ℓ'_{ki}. The next condition gives us that.
Condition 4. For each k, there exists finite σ_k such that the ℓ'_{ki} are σ_k-subgaussian.
Unlike the quantities we have introduced earlier, such as L and κ_H, the σ_k need to be known to carry out the algorithm. Because ℓ'_{k1}, ℓ'_{k2}, . . . are iid, σ_k² may be estimated, after the sample is drawn, by the population variance formula, i.e., (1/(N_k − 1)) Σ_{i=1}^{N_k} (ℓ'_{ki} − ℓ'_k)². We discuss below, in the context of setting N_k, how to make use of a "retrospective" estimate of σ_k in practice.
Two user-selected constants control what steps are accepted: η ∈ (0, 1/2) and γ > 0. The step is accepted iff 1) the observed improvement ℓ'_k exceeds the fraction η of the model improvement m'_k, and 2) the model improvement is at least a small fraction γ/η of the trust region radius squared. Formally, steps are accepted iff
ℓ'_k ≥ η m'_k ≥ γ δ_k².   (10)
If η m'_k < γ δ_k², the step is rejected regardless of ℓ'_k: we set N_k = 0.
Otherwise, we pick the smallest N_k such that
N_k ≥ max_y { (2σ_k² / (η m'_k + y)²) · log( 2(ψ2 δ_k² + y) / (ψ1 δ_k²) ) },   (11)
where the maximum is over the range of possible values of y = −ΔL'_k for steps that must be rejected, and
ψ1 ≜ λ(1 − α^{−2})  and  ψ2 ≜ λ(α² − α^{−2}).   (12)
Finding the smallest such N_k is a one-dimensional optimization problem. We solve it via bisection. Inequality 11 ensures that we sample enough to reject most steps that do not improve the objective sufficiently. If we knew exactly how a proposed step changed the objective, we could express in closed form how many samples would be needed to detect bad steps with sufficiently high probability. Since we do not know that, Inequality 11 is for all such change-values in a range. Nonetheless, N_k is rarely large in practice: the second factor lower bounding N_k is logarithmic in y; in the first factor the denominator is bounded away from zero.
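The bisection itself is standard; a Python sketch for the smallest integer satisfying a monotone feasibility test (such as Inequality 11 evaluated at a candidate N) follows, with the cap n_max an assumption of ours.

def smallest_n(satisfies, n_max=1 << 20):
    # Smallest N in [1, n_max] with satisfies(N) true; satisfies is monotone.
    lo, hi = 1, n_max
    if not satisfies(hi):
        return None          # no feasible sample size at or below n_max
    while lo < hi:
        mid = (lo + hi) // 2
        if satisfies(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo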
Finally, if δ_k ≤ δ̄_k, we also ensure N_k is large enough that
N_k ≥ 2σ_k² log(1/(1 − τ1)) / (θ1² ||∇L_k||² δ_k²).   (13)
Selecting N_k this large ensures that we sample enough to detect most steps that improve the value of the objective sufficiently when the trust region is small. This bound is not high in practice. Because of how the ℓ'_{ki} are collected (a "matched-pairs experiment"), as δ_k becomes small, σ_k becomes small too, at roughly the same rate.
In practice, at the end of each iteration, we estimate whether N_k was large enough to meet the conditions. If not, we set N_{k+1} = 2N_k. If N_k exceeds the size of the gradient's minibatch, and it is more than twice as large as necessary to meet the conditions, we set N_{k+1} = N_k/2. These N_k function evaluations require little computation compared to computing gradients and Hessian-vector products.
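Putting the pieces together, one TrustVI iteration has the following shape. This Python sketch is schematic: the parameter values are illustrative rather than the paper's, and grad_hat, model_step, and improvement stand for the stochastic gradient, the trust-region subproblem solver, and the matched-pairs estimator described above.

def trustvi(omega, grad_hat, model_step, improvement,
            eta=0.1, gamma=1e-4, alpha=1.5, delta=1.0,
            delta_max=10.0, iters=100, n_assess=128):
    for _ in range(iters):
        g = grad_hat(omega)
        s, m_model = model_step(g, omega, delta)      # s_k and m'_k
        if eta * m_model >= gamma * delta ** 2:
            ell, _ = improvement(omega, s, n_assess)  # matched-pairs ell'_k
            if ell >= eta * m_model:                  # accept (Inequality 10)
                omega = omega + s
                delta = min(alpha * delta, delta_max)
                continue
        delta = delta / alpha                         # reject: shrink region
    return omega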
4 Convergence to a stationary point
To show that TrustVI converges to a stationary point, we reason about the stochastic process (Φ_k)_{k=1}^∞, where
Φ_k ≜ L_k − λ δ_k².   (14)
In words, Φ_k is the objective function penalized by the weighted squared trust region radius.
Because TrustVI is stochastic, neither L_k nor δ_k necessarily increases at every iteration. But Φ_k increases in expectation at each iteration (Lemma 1). That alone, however, does not suffice to show TrustVI reaches a stationary point; Φ_k must increase in expectation by enough at each iteration. Lemma 1 and Lemma 2 in combination show just that. The latter states that the trust region radius cannot remain small unless the gradient is small too, while the former states that the expected increase is a constant fraction of the squared trust region radius. Perhaps surprisingly, Lemma 1 does not depend on the quality of the quadratic model: rejecting a proposed step always leads to sufficient increase in Φ_k. Accepting a bad step, though possible, rapidly becomes less likely as the proposed step gets worse. No matter how bad a proposed step is, Φ_k increases in expectation.
Theorem 1. For Algorithm 1,
lim_{k→∞} ||∇L_k|| = 0 a.s.   (15)
Proof. By Condition 1, L is bounded above. The trust region radius δ_k is positive almost surely by construction. Therefore, Φ_k is bounded above almost surely by the constant sup L. Let the constant c ≜ sup L − Φ_0. Then,
Σ_{k=1}^∞ E[ΔΦ_k | M_k] ≤ c a.s.   (16)
By Lemma 1, E[ΔΦ_k | M_k^+], and hence E[ΔΦ_k | M_k], is almost surely nonnegative. Therefore, E[ΔΦ_k | M_k] → 0 almost surely. By an additional application of Lemma 1, δ_k² → 0 almost surely too.
Suppose there exist K_0 and ε > 0 such that ||∇L_k|| ≥ ε for all k > K_0. Fix K ≥ K_0 such that δ_k meets the conditions of Lemma 2 for all k ≥ K. By Lemma 2, (log δ_k)_{k=K}^∞ is a submartingale. A submartingale almost surely does not go to −∞, so δ_k almost surely does not go to 0. The contradiction implies that ||∇L_k|| < ε infinitely often.
Because our choice of ε was arbitrary,
lim inf_{k→∞} ||∇L_k|| = 0 a.s.   (17)
Because δ_k² → 0 almost surely, this limit point is unique.
Lemma 1.
E[ΔΦ_k | M_k^+] ≥ (ψ1/2) δ_k² a.s.   (18)
Proof. Let ρ denote the probability that the proposed step is accepted. Then,
E[ΔΦ_k | M_k^+] = (1 − ρ)[λ(1 − α^{−2}) δ_k²] + ρ[ΔL'_k − λ(α² − 1) δ_k²]   (19)
= ρ[ΔL'_k − ψ2 δ_k²] + ψ1 δ_k².   (20)
By the lower bound on λ, ψ1 ≥ 0. If η m'_k < γ δ_k², the step is rejected regardless of ℓ'_k, so the lemma holds. Also, if ΔL'_k ≥ ψ2 δ_k², then the lemma holds for any ρ ∈ [0, 1]. So, consider just ΔL'_k < ψ2 δ_k² and η m'_k ≥ γ δ_k².
The probability ρ of accepting this step is a tail bound on the sum of iid subgaussian random variables. By Condition 4, Hoeffding's inequality applies. Then, Inequality 11 lets us cancel some of the remaining iteration-specific variables:
ρ = P(ℓ'_k ≥ η m'_k | M_k^+)   (21)
= P(ℓ'_k − ΔL'_k ≥ η m'_k − ΔL'_k | M_k^+)   (22)
= P( Σ_{i=1}^{N_k} (ℓ'_{ki} − ΔL'_k) ≥ (η m'_k − ΔL'_k) N_k | M_k^+ )   (23)
≤ exp( −(η m'_k − ΔL'_k)² N_k / (2σ_k²) )   (24)
≤ ψ1 δ_k² / (2(ψ2 δ_k² − ΔL'_k)).   (25)
The lemma follows from substituting Inequality 25 into Equation 20.
Lemma 2. For each iteration k, on the event δ_k ≤ δ̄_k, we have
P(ℓ'_k ≥ η m'_k | M_k) ≥ τ0 τ1 > 1/2.   (26)
The proof appears in Appendix A of the supplementary material.
5 Experiments
Our experiments compare TrustVI to both Automatic Differentiation Variational Inference (ADVI) [4] and Hessian-free Stochastic Gradient Variational Inference (HFSGVI) [9]. We use the authors' Stan [21] implementation of ADVI, and implement the other two algorithms in Stan as well.
Our study set comprises 183 statistical models and datasets from [22], an online repository of open-source Stan models and datasets. For our trials, the variational distribution is always mean-field multivariate Gaussian. The dimensions of ELBO domains range from 2 to 2012.
[Figure 1: Each panel shows optimization paths for five runs of ADVI, TrustVI, and HFSGVI, for a particular dataset and statistical model. Both axes (ELBO versus runtime in oracle calls) are log scale. (a) A variance components model ("Dyes") from [18]; 18-dimensional domain. (b) A bivariate normal hierarchical model ("Birats") from [19]; 132-dimensional domain. (c) A multi-level linear model ("Electric Chr") from [20]; 100-dimensional domain. (d) A multi-level linear model ("Radon Redundant Chr") from [20]; 176-dimensional domain.]
In addition to the final objective value for each method, we compare the runtime each method
requires to produce iterates whose ELBO values are consistently above a threshold. As the threshold,
for each pair of methods we compare, we take the ELBO value reached by the worse performing
method, and subtract one nat from it.
We measure runtime in "oracle calls" rather than wall clock time so that the units are independent
of the implementation. Stochastic gradients, stochastic Hessian-vector products, and estimates
of change in ELBO value are assigned one, two, and one oracle calls, respectively, to reflect the
number of floating point operations required to compute them. Each stochastic gradient is based
on a minibatch of 256 samples of the variational distribution. The number of variational samples
for stochastic Hessian-vector products and for estimates of change (85 and 128, respectively) are
selected to match the degree of parallelism for stochastic gradient computations.
To make our comparison robust to outliers, for each method and each model, we optimize five times,
but ignore all runs except the one that attains the median final objective value.
5.1 Comparison to ADVI
ADVI has two phases that contribute to runtime: During the first phase, a learning rate is selected based on progress made by SGD during trials of 50 (by default) "adaptation" SGD iterations, for as many as six learning rates. During the second phase, the variational objective is optimized with the learning rate that made the most progress during the trials. If the number of adaptation iterations is small relative to the number of iterations needed to optimize the variational objective, then the learning rate selected may be too large: what appears most productive at first may be overly "greedy" for a longer run. Conversely, a large number of adaptation iterations may leave little computational budget for the actual optimization. We experimented with both more and fewer adaptation iterations
than the default but did not find a setting that was uniformly better than the default. Therefore, we
report on the default number of adaptation iterations for our experiments.
Case studies. Figure 1 and Appendix B show the optimization paths for several models, chosen to
demonstrate typical performance. Often ADVI does not finish its adaptation phase before TrustVI
converges. Once the adaptation phase ends, ADVI generally increased the objective value more gradually than TrustVI did, despite having expended iterations to tune its learning rate.
Quality of optimal points. For 126 of the 183 models (69%), on sets of five runs, the median optimal
values found by ADVI and TrustVI did not differ substantively. For 51 models (28%), TrustVI found
better optimal values than ADVI. For 6 models (3%), ADVI found better optimal values than TrustVI.
Runtime. We excluded model-threshold pairs from the runtime comparison that did not require at
least five iterations to solve; they were too easy to be representative of problems where the choice
of optimization algorithm matters. For 136 of 137 models (99%) remaining in our study set, TrustVI
was faster than ADVI. For 69 models (50%), TrustVI was at least 12x faster than ADVI. For 34
models (25%), TrustVI was at least 36x faster than ADVI.
5.2 Comparison to HFSGVI
HFSGVI applies Newton's method, an algorithm that converges for convex and deterministic objective functions, to an objective function that is neither. But do convergence guarantees matter in practice?
Often HFSGVI takes steps so large that numerical overflow occurs during the next iteration: the gradient "explodes" if we take a bad enough step. With TrustVI, we reject obviously bad steps (e.g., those causing numerical overflow) and try again with a smaller trust region.
We tried several heuristics to work around this problem with HFSGVI, including shrinking the
norm of the very large steps that would otherwise cause numerical overflow. But "large" is relative,
depending on the problem, the parameter, and the current iterate; severely restricting step size would
unfairly limit HFSGVI's rate of convergence. Ultimately, we excluded 23 of the 183 models from
further analysis because HFSGVI consistently generated numerical overflow errors for them, leaving
160 models in our study set.
Case studies. Figure 1 and Appendix B show that even when HFSGVI does not step so far as to
cause numerical overflow, it nonetheless often makes the objective value worse before it gets better.
HFSGVI, however, sometimes makes faster progress during the early iterations, while TrustVI is
rejecting steps as it searches for an appropriate trust region radius.
Quality of optimal points. For 107 of the 160 models (59%), on sets of five runs, the median optimal
value found by TrustVI and HFSGVI did not differ substantively. For 51 models (28%), TrustVI
found better optimal values than HFSGVI. For 1 model (0.5%), HFSGVI found a better optimal
value than TrustVI.
Runtime. We excluded 45 model-threshold pairs from the runtime comparison that did not require
at least five iterations to solve, as in Section 5.1. For the remainder of the study set, TrustVI was
faster than HFSGVI for 61 models, whereas HFSGVI was faster than TrustVI for 54 models. As
a reminder, HFSGVI failed to converge on another 23 models that we excluded from the study set.
6 Conclusions
For variational inference, it is no longer necessary to pick between slow stochastic first-order
optimization (e.g., ADVI) and fast-but-restrictive deterministic second-order optimization. The
algorithm we propose, TrustVI, leverages stochastic second-order information, typically finding
a solution at least one order of magnitude faster than ADVI. While HFSGVI also uses stochastic
second-order information, it lacks convergence guarantees. For more than one-third of our
experiments, HFSGVI terminated at substantially worse ELBO values than TrustVI, demonstrating
that convergence theory matters in practice.
References
[1] Diederik Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
[2] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014.
[3] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning, 2014.
[4] Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic differentiation variational inference. Journal of Machine Learning Research, 18(14):1–45, 2017.
[5] Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[6] Jeffrey Regier et al. Learning an astronomical catalog of the visible universe through scalable Bayesian inference. arXiv preprint arXiv:1611.03404, 2016.
[7] Jeffrey Fike and Juan Alonso. Automatic differentiation through the use of hyper-dual numbers for second derivatives. In Recent Advances in Algorithmic Differentiation, pages 163–173. Springer, 2012.
[8] Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160, 1994.
[9] Kai Fan, Ziteng Wang, Jeffrey Beck, James Kwok, and Katherine Heller. Fast second-order stochastic backpropagation for variational inference. In Advances in Neural Information Processing Systems, 2015.
[10] Sara Shashaani, Susan Hunter, and Raghu Pasupathy. ASTRO-DF: Adaptive sampling trust-region optimization algorithms, heuristics, and numerical experience. In IEEE Winter Simulation Conference, 2016.
[11] Geng Deng and Michael C. Ferris. Variable-number sample-path optimization. Mathematical Programming, 117(1):81–109, 2009.
[12] Ruobing Chen, Matt Menickelly, and Katya Scheinberg. Stochastic optimization using a trust-region method and random models. Mathematical Programming, pages 1–41, 2017.
[13] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017.
[14] James Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. John Wiley & Sons, 2005.
[15] Bradley Efron and Charles Stein. The jackknife estimate of variance. The Annals of Statistics, pages 586–596, 1981.
[16] Nicholas Gould, Stefano Lucidi, Massimo Roma, and Philippe Toint. Solving the trust-region subproblem using the Lanczos method. SIAM Journal on Optimization, 9(2):504–525, 1999.
[17] Felix Lenders, Christian Kirches, and Andreas Potschka. trlib: A vector-free implementation of the GLTR method for iterative solution of the trust region problem. arXiv preprint arXiv:1611.04718, 2016.
[18] OpenBugs developers. Dyes: Variance components model. http://www.openbugs.net/Examples/Dyes.html, 2017. [Online; accessed Oct 8, 2017].
[19] OpenBugs developers. Rats: A normal hierarchical model. http://www.openbugs.net/Examples/Rats.html, 2017. [Online; accessed Oct 8, 2017].
[20] Andrew Gelman and Jennifer Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006.
[21] Bob Carpenter et al. Stan: A probabilistic programming language. Journal of Statistical Software, 20, 2016.
[22] Stan developers. https://github.com/stan-dev/example-models, 2017. [Online; accessed Jan 3, 2017; commit 6fbbf36f9d14ed69c7e6da2691a3dbe1e3d55dea].
[23] OpenBugs developers. Alligators: Multinomial-logistic regression. http://www.openbugs.net/Examples/Aligators.html, 2017. [Online; accessed Oct 4, 2017].
[24] OpenBugs developers. Seeds: Random effect logistic regression. http://www.openbugs.net/Examples/Seeds.html, 2017. [Online; accessed Oct 4, 2017].
[25] David Lunn, Chris Jackson, Nicky Best, Andrew Thomas, and David Spiegelhalter. The BUGS Book: A Practical Introduction to Bayesian Analysis. CRC Press, 2012.
Scalable Demand-Aware Recommendation
Jinfeng Yi1*, Cho-Jui Hsieh2, Kush R. Varshney3, Lijun Zhang4, Yao Li2
1 AI Foundations Lab, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA
2 University of California, Davis, CA, USA
3 IBM Research AI, Yorktown Heights, NY, USA
4 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract
Recommendation for e-commerce with a mix of durable and nondurable goods
has characteristics that distinguish it from the well-studied media recommendation
problem. The demand for items is a combined effect of form utility and time utility,
i.e., a product must both be intrinsically appealing to a consumer and the time must
be right for purchase. In particular for durable goods, time utility is a function of
inter-purchase duration within product category because consumers are unlikely to
purchase two items in the same category in close temporal succession. Moreover,
purchase data, in contrast to ratings data, is implicit with non-purchases not necessarily indicating dislike. Together, these issues give rise to the positive-unlabeled
demand-aware recommendation problem that we pose via joint low-rank tensor
completion and product category inter-purchase duration vector estimation. We
further relax this problem and propose a highly scalable alternating minimization
approach with which we can solve problems with millions of users and millions of
items in a single thread. We also show superior prediction accuracies on multiple
real-world data sets.
1 Introduction
E-commerce recommender systems aim to present items with high utility to the consumers [18].
Utility may be decomposed into form utility: the item is desired as it is manifested, and time utility:
the item is desired at the given point in time [28]; recommender systems should take both types of
utility into account. Economists define items to be either durable goods or nondurable goods based
on how long they are intended to last before being replaced [27]. A key characteristic of durable
goods is the long duration of time between successive purchases within item categories whereas this
duration for nondurable goods is much shorter, or even negligible. Thus, durable and nondurable
goods have differing time utility characteristics which lead to differing demand characteristics.
Although we have witnessed great success of collaborative filtering in media recommendation, we
should be careful when expanding its application to general e-commerce recommendation involving
both durable and nondurable goods due to the following reasons:
1. Since media such as movies and music are nondurable goods, most users are quite receptive
to buying or renting them in rapid succession. However, users only purchase durable goods
when the time is right. For instance, most users will not buy televisions the day after they have
already bought one. Therefore, recommending an item for which a user has no immediate
demand can hurt user experience and waste an opportunity to drive sales.
* Now at Tencent AI Lab, Bellevue, WA, USA
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2. A key assumption made by matrix factorization- and completion-based collaborative filtering
algorithms is that the underlying rating matrix is of low-rank since only a few factors typically
contribute to an individual's form utility [5]. However, a user's demand is not only driven by
form utility, but is the combined effect of both form utility and time utility. Hence, even if the
underlying form utility matrix is of low-rank, the overall purchase intention matrix is likely to
be of high-rank,2 and thus cannot be directly recovered by existing approaches.
An additional challenge faced by many real-world recommender systems is the one-sided sampling
of implicit feedback [15, 23]. Unlike the Netflix-like setting that provides both positive and negative
feedback (high and low ratings), no negative feedback is available in many e-commerce systems.
For example, a user might not purchase an item because she does not derive utility from it, or just
because she was simply unaware of it or plans to buy it in the future. In this sense, the labeled training
data only draws from the positive class, and the unlabeled data is a mixture of positive and negative
samples, a problem usually referred to as positive-unlabeled (PU) learning [13].
To address these issues, we study the problem of demand-aware recommendation. Given purchase
triplets (user, item, time) and item categories, the objective is to make recommendations based on
users' overall predicted combination of form utility and time utility.
We denote purchases by the sparse binary tensor P. To model implicit feedback, we assume that
P is obtained by thresholding an underlying real-valued utility tensor to a binary tensor Y and then
revealing a subset of $\mathcal{Y}$'s positive entries. The key to demand-aware recommendation is defining
an appropriate utility measure for all (user, item, time) triplets. To this end, we quantify purchase
intention as a combined effect of form utility and time utility. Specifically, we model a user's time
utility for an item by comparing the time $t$ since her most recent purchase within the item's category
and the item category's underlying inter-purchase duration $d$; the larger the value of $d - t$, the less
likely she needs this item. In contrast, $d < t$ may indicate that the item needs to be replaced, and
she may be open to related recommendations. Therefore, the function $h = \max(0, d - t)$ may be
employed to measure the time utility factor for a (user, item) pair. Then the purchase intention for a
(user, item, time) triplet is given by $x - h$, where $x$ denotes the user's form utility. This observation
allows us to cast demand-aware recommendation as the problem of learning users' form utility tensor
$\mathcal{X}$ and items' inter-purchase duration vector $\mathbf{d}$ given the binary tensor $\mathcal{P}$.
Although the learning problem can be naturally formulated as a tensor nuclear norm minimization
problem, the high computational cost significantly limits its application to large-scale recommendation
problems. To address this limitation, we first relax the problem to a matrix optimization problem with
a label-dependent loss. We note that the problem after relaxation is still non-trivial to solve since it
is a highly non-smooth problem with nested hinge losses. More severely, the optimization problem
involves mnl entries, where m, n, and l are the number of users, items, and time slots, respectively.
Thus a naive optimization algorithm will take at least O(mnl) time, and is intractable for largescale recommendation problems. To overcome this limitation, we develop an efficient alternating
minimization algorithm and show that its time complexity is only approximately proportional to the
number of nonzero elements in the purchase records tensor P. Since P is usually very sparse, our
algorithm is extremely efficient and can solve problems with millions of users and items.
Compared to existing recommender systems, our work has the following contributions and advantages:
(i) to the best of our knowledge, this is the first work that makes demand-aware recommendation by
considering inter-purchase durations for durable and nondurable goods; (ii) the proposed algorithm is
able to simultaneously infer items? inter-purchase durations and users? real-time purchase intentions,
which can help e-retailers make more informed decisions on inventory planning and marketing
strategy; (iii) by effectively exploiting sparsity, the proposed algorithm is extremely efficient and able
to handle large-scale recommendation problems.
2 Related Work
Our contributions herein relate to three different areas of prior work: consumer modeling from a
microeconomics and marketing perspective [6], time-aware recommender systems [4, 29, 8, 19], and
PU learning [20, 9, 13, 14, 23, 2]. The extensive consumer modeling literature is concerned with
descriptive and analytical models of choice rather than prediction or recommendation, but nonetheless
2. A detailed illustration can be found in the supplementary material.
forms the basis for our modeling approach. A variety of time-aware recommender systems have
been proposed to exploit time information, but none of them explicitly consider the notion of time
utility derived from inter-purchase durations in item categories. Much of the PU learning literature is
focused on the binary classification problem, e.g. [20, 9], whereas we are in the collaborative filtering
setting. For the papers that do examine collaborative filtering with PU learning or learning with
implicit feedback [14, 23, 2, 32], they mainly focus on media recommendation and overlook users'
demands, thus are not suitable for durable goods recommendation.
Temporal aspects of the recommendation problem have been examined in a few ways: as part of
the cold-start problem [3], to capture dynamics in interests or ratings over time [17], and as part of
the context in context-aware recommenders [1]. However, the problem we address in this paper is
different from all of those aspects, and in fact could be combined with the other aspects in future
solutions. To the best of our knowledge, there is no existing work that tries to take inter-purchase
durations into account to better time recommendations as we do herein.
3
Positive-Unlabeled Demand-Aware Recommendation
Throughout the paper, we use boldface Euler script letters, boldface capital letters, and boldface
lower-case letters to denote tensors (e.g., A), matrices (e.g., A) and vectors (e.g., a), respectively.
Scalars such as entries of tensors, matrices, and vectors are denoted by lowercase letters, e.g., a. In
particular, the (i, j, k) entry of a third-order tensor A is denoted by aijk .
Given a set of m users, n items, and l time slots, we construct a third-order binary tensor P ?
{0, 1}m?n?l to represent the purchase history. Specifically, entry pijk = 1 indicates that user i has
purchased item j in time slot k. We denote kPk0 as the number of nonzero entries in tensor P.
Since P is usually very sparse, we have kPk0 mnl. Also, we assume that the n items belong to r
item categories, with items in each category sharing similar inter-purchase durations.3 We use an
n-dimensional vector c ? {1, 2, . . . , r}n to represent the category membership of each item. Given
P and c, we further generate a tensor T ? Rm?r?l where ticj k denotes the number of time slots
between user i?s most recent purchase within item category cj until time k. If user i has not purchased
within item category cj until time k, ticj k is set to +?.
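The tensor $\mathcal{T}$ can be built in one chronological pass over the purchase records. The sketch below uses dense arrays for clarity, and it adopts one possible convention for a purchase in slot $k$ itself (the gap resets starting from the next slot); the text leaves that boundary case implicit:

```python
import numpy as np

def build_time_gaps(purchases, category, m, r, l):
    """purchases: iterable of (user i, item j, slot k) triplets with p_ijk = 1;
    category[j] maps item j to its category. Returns T of shape (m, r, l) with
    t[i, c, k] = slots since user i's last purchase in category c, else +inf."""
    last = np.full((m, r), -np.inf)            # slot of most recent purchase
    T = np.full((m, r, l), np.inf)
    by_time = sorted(purchases, key=lambda rec: rec[2])
    idx = 0
    for k in range(l):
        T[:, :, k] = k - last                  # inf when no previous purchase
        while idx < len(by_time) and by_time[idx][2] == k:
            i, j, _ = by_time[idx]
            last[i, category[j]] = k
            idx += 1
    return T
```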
3.1 Inferring Purchase Intentions from Users' Purchase Histories
In this work, we formulate users' utility as a combined effect of form utility and time utility. To this
end, we use an underlying third-order tensor $\mathcal{X} \in \mathbb{R}^{m \times n \times l}$ to quantify form utility. In addition, we
employ a non-negative vector $\mathbf{d} \in \mathbb{R}^r_+$ to measure the underlying inter-purchase duration times of the
$r$ item categories. It is understood that the inter-purchase durations for durable good categories are
large, while those for nondurable good categories are small, or even zero. In this study, we focus on items'
inherent properties and assume that the inter-purchase durations are user-independent. The problem
of learning personalized durations will be studied in our future work.

As discussed above, the demand is mediated by the time elapsed since the last purchase of an item
in the same category. Let $d_{c_j}$ be the inter-purchase duration time of item $j$'s category $c_j$, and let
$t_{i c_j k}$ be the time gap of user $i$'s most recent purchase within item category $c_j$ until time $k$. Then
if $d_{c_j} > t_{i c_j k}$, a previously purchased item in category $c_j$ continues to be useful, and thus user $i$'s
utility from item $j$ is weak. Intuitively, the greater the value $d_{c_j} - t_{i c_j k}$, the weaker the utility. On the
other hand, $d_{c_j} < t_{i c_j k}$ indicates that the item is nearing the end of its lifetime and the user may be
open to recommendations in category $c_j$. We use a hinge loss $\max(0,\, d_{c_j} - t_{i c_j k})$ to model such time
utility. The overall utility can be obtained by comparing form utility and time utility. In more detail,
we model a binary utility indicator tensor $\mathcal{Y} \in \{0, 1\}^{m \times n \times l}$ as being generated by the following
thresholding process:

$$y_{ijk} = \mathbb{1}\big[x_{ijk} - \max(0,\, d_{c_j} - t_{i c_j k}) > \tau\big], \qquad (1)$$

where $\mathbb{1}(\cdot): \mathbb{R} \to \{0, 1\}$ is the indicator function, and $\tau > 0$ is a predefined threshold.
3. To meet this requirement, the granularity of categories should be properly selected. For instance, the category "Smart TV" is a better choice than the category "Electrical Equipment", since the latter category covers a broad range of goods with different durations.
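Given $\mathcal{X}$, $\mathbf{d}$, $\mathcal{T}$, and $\tau$, the thresholding in (1) is a direct elementwise computation. A vectorized sketch, with dense arrays for illustration only (a real system would never materialize an $m \times n \times l$ array):

```python
import numpy as np

def utility_indicator(X, d, T, category, tau):
    """Equation (1): y_ijk = 1 iff x_ijk - max(0, d_{c_j} - t_{i,c_j,k}) > tau.
    X: (m, n, l) form utility; d: (r,) durations; T: (m, r, l) time gaps;
    category: (n,) item-to-category map."""
    d_item = d[category]                        # (n,) duration of each item's category
    gaps = T[:, category, :]                    # (m, n, l) gaps per (user, item, slot)
    time_penalty = np.maximum(0.0, d_item[None, :, None] - gaps)
    return (X - time_penalty > tau).astype(np.uint8)
```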
Note that the positive entries of $\mathcal{Y}$ denote high purchase intentions, while the positive entries of $\mathcal{P}$
denote actual purchases. Generally speaking, a purchase only happens when the utility is high, but
a high utility does not necessarily lead to a purchase. This observation allows us to link the binary
tensors $\mathcal{P}$ and $\mathcal{Y}$: $\mathcal{P}$ is generated by a one-sided sampling process that only reveals a subset of
$\mathcal{Y}$'s positive entries. Given this observation, we follow [13] and include a label-dependent loss [26]
trading the relative cost of positive and unlabeled samples:

$$\mathcal{L}(\mathcal{X}, \mathcal{P}) = \alpha \sum_{ijk:\, p_{ijk}=1} \max\big[1 - \big(x_{ijk} - \max(0,\, d_{c_j} - t_{i c_j k})\big),\, 0\big]^2 + (1 - \alpha) \sum_{ijk:\, p_{ijk}=0} \ell(x_{ijk}, 0),$$

where $\ell(x, c) = (x - c)^2$ denotes the squared loss.
In addition, the form utility tensor $\mathcal{X}$ should be of low-rank to capture temporal dynamics of users'
interests, which are generally believed to be dictated by a small number of latent factors [22]. By
combining asymmetric sampling and the low-rank property together, we jointly recover the tensor $\mathcal{X}$
and the inter-purchase duration vector $\mathbf{d}$ by solving the following tensor nuclear norm minimization
(TNNM) problem:

$$\min_{\mathcal{X} \in \mathbb{R}^{m \times n \times l},\, \mathbf{d} \in \mathbb{R}^r_+} \;\alpha \sum_{ijk:\, p_{ijk}=1} \max\big[1 - \big(x_{ijk} - \max(0,\, d_{c_j} - t_{i c_j k})\big),\, 0\big]^2 + (1 - \alpha) \sum_{ijk:\, p_{ijk}=0} x_{ijk}^2 + \lambda \|\mathcal{X}\|_*, \qquad (2)$$

where $\|\mathcal{X}\|_*$ denotes the tensor nuclear norm, a convex combination of the nuclear norms of $\mathcal{X}$'s unfolded
matrices [21]. Given the learned $\hat{\mathcal{X}}$ and $\hat{\mathbf{d}}$, the underlying binary tensor $\mathcal{Y}$ can be recovered by (1).
We note that although the TNNM problem (2) can be solved by optimization techniques such as block
coordinate descent [21] and ADMM [10], they suffer from high computational cost since they need
to be solved iteratively with multiple SVDs at each iteration. An alternative way to solve the problem
is tensor factorization [16]. However, this also involves iterative singular vector estimation and is thus
not scalable enough. As a typical example, recovering a rank-20 tensor of size $500 \times 500 \times 500$ takes
the state-of-the-art tensor factorization algorithm TenALS4 more than 20,000 seconds on an Intel
Xeon 2.40 GHz processor with 32 GB main memory.
3.2 A Scalable Relaxation
In this subsection, we discuss how to significantly improve the scalability of the proposed demand-aware
recommendation model. To this end, we assume that an individual's form utility does not
change over time, an assumption widely used in many collaborative filtering methods [25, 32]. Under
this assumption, the tensor $\mathcal{X}$ is a repeated copy of its frontal slice $\mathbf{x}_{::1}$, i.e.,

$$\mathcal{X} = \mathbf{x}_{::1} \circ \mathbf{e}, \qquad (3)$$

where $\mathbf{e}$ is an $l$-dimensional all-one vector and the symbol $\circ$ represents the outer product operation.
In this way, we can relax the problem of learning a third-order tensor $\mathcal{X}$ to the problem of learning
its frontal slice, which is a second-order tensor (matrix). For notational simplicity, we use a matrix $\mathbf{X}$
to denote the frontal slice $\mathbf{x}_{::1}$, and use $x_{ij}$ to denote the entry $(i, j)$ of the matrix $\mathbf{X}$.

Since $\mathcal{X}$ is a low-rank tensor, its frontal slice $\mathbf{X}$ should be of low rank as well. Hence, the minimization problem (2) simplifies to:

$$\min_{\mathbf{X} \in \mathbb{R}^{m \times n},\, \mathbf{d} \in \mathbb{R}^r_+} \;\alpha \sum_{ijk:\, p_{ijk}=1} \max\big[1 - \big(x_{ij} - \max(0,\, d_{c_j} - t_{i c_j k})\big),\, 0\big]^2 + (1 - \alpha) \sum_{ijk:\, p_{ijk}=0} x_{ij}^2 + \lambda \|\mathbf{X}\|_* := f(\mathbf{X}, \mathbf{d}), \qquad (4)$$

where $\|\mathbf{X}\|_*$ stands for the matrix nuclear norm, the convex surrogate of the matrix rank function. By
relaxing the optimization problem (2) to the problem (4), we recover a matrix instead of a tensor to
infer users' purchase intentions.

4. http://web.engr.illinois.edu/~swoh/software/optspace/code.html
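As a concreteness check, $f(\mathbf{X}, \mathbf{d})$ in (4) can be evaluated without touching all $mnl$ terms: every unobserved triplet at entry $(i, j)$ contributes $x_{ij}^2$, so the unlabeled part collapses to a per-entry count. A sketch with $\alpha$ and $\lambda$ as in the objective (validation code, not the scalable solver):

```python
import numpy as np

def objective(X, d, purchases, category, T, l, alpha, lam):
    """Evaluate f(X, d) of problem (4); purchases lists (i, j, k) with p_ijk = 1."""
    pos = 0.0
    counts = np.zeros_like(X)                   # n_ij = #{k : p_ijk = 1}
    for i, j, k in purchases:
        slack = 1.0 - (X[i, j] - max(0.0, d[category[j]] - T[i, category[j], k]))
        pos += max(slack, 0.0) ** 2
        counts[i, j] += 1
    neg = np.sum((l - counts) * X ** 2)         # each (i, j) appears (l - n_ij) times
    nuclear = np.sum(np.linalg.svd(X, compute_uv=False))
    return alpha * pos + (1.0 - alpha) * neg + lam * nuclear
```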
4 Optimization
Although the learning problem has been relaxed, optimizing (4) is still very challenging for two main
reasons: (i) the objective is highly non-smooth with nested hinge losses, and (ii) it contains mnl
terms, and a naive optimization algorithm will take at least O(mnl) time.
To address these challenges, we adopt an alternating minimization scheme that iteratively fixes one of
d and X and minimizes with respect to the other. Specifically, we propose an extremely efficient
optimization algorithm by effectively exploring the sparse structure of the tensor P and low-rank
structure of the matrix X. We show that (i) the problem (4) can be solved within O(kPk0 (k +
log(kPk0 )) + (n + m)k 2 ) time, where k is the rank of X, and (ii) the algorithm converges to the
critical points of f (X, d). In the following, we provide a sketch of the algorithm. The detailed
description can be found in the supplementary material.
4.1 Update d
When $\mathbf{X}$ is fixed, the optimization problem with respect to $\mathbf{d}$ can be written as:

$$\min_{\mathbf{d}} \sum_{ijk:\, p_{ijk}=1} \max\big[1 - \big(x_{ij} - \max(0,\, d_{c_j} - t_{i c_j k})\big),\, 0\big]^2 := g(\mathbf{d}) := \sum_{ijk:\, p_{ijk}=1} g_{ijk}(d_{c_j}). \qquad (5)$$

Problem (5) is non-trivial to solve since it involves nested hinge losses. Fortunately, by carefully
analyzing the value of each term $g_{ijk}(d_{c_j})$, we can show that

$$g_{ijk}(d_{c_j}) = \begin{cases} \max(1 - x_{ij},\, 0)^2, & \text{if } d_{c_j} \le t_{i c_j k} + \max(x_{ij} - 1,\, 0) \\ \big(1 - (x_{ij} - d_{c_j} + t_{i c_j k})\big)^2, & \text{if } d_{c_j} > t_{i c_j k} + \max(x_{ij} - 1,\, 0). \end{cases}$$

For notational simplicity, we let $s_{ijk} = t_{i c_j k} + \max(x_{ij} - 1,\, 0)$ for all triplets $(i, j, k)$ satisfying
$p_{ijk} = 1$. Now we can focus on each category $\beta$: for each $\beta$, we collect the set $Q = \{(i, j, k) \mid p_{ijk} = 1 \text{ and } c_j = \beta\}$ and calculate the corresponding $s_{ijk}$. We then sort them such that $s_{(i_1 j_1 k_1)} \le \cdots \le s_{(i_{|Q|} j_{|Q|} k_{|Q|})}$. For each interval $[s_{(i_q j_q k_q)},\, s_{(i_{q+1} j_{q+1} k_{q+1})}]$, the function is quadratic, thus can
be solved in a closed form. Therefore, by scanning the solution regions from left to right according to
the sorted $s$ values, and maintaining some intermediate computed variables, we are able to find the
optimal solution, as summarized by the following lemma:

Lemma 1. The subproblem (5) is convex with respect to $\mathbf{d}$ and can be solved exactly in
$O(\|\mathcal{P}\|_0 \log \|\mathcal{P}\|_0)$ time, where $\|\mathcal{P}\|_0$ is the number of nonzero elements in tensor $\mathcal{P}$.

Therefore, we can efficiently update $\mathbf{d}$ since $\mathcal{P}$ is a very sparse tensor with only a small number of
nonzero elements.
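A NumPy sketch of the scan for a single category follows; `x` and `t` hold the form utilities and time gaps of that category's positive records. Dropping records with $t = +\infty$ (first purchases, which contribute a constant in $d$) and enforcing $d \ge 0$ by clipping are our own conventions:

```python
import numpy as np

def update_duration(x, t):
    """Exact minimizer of g over one category (Lemma 1). For d <= s_q the q-th
    term is the constant max(1 - x_q, 0)^2; for d > s_q it is (d - b_q)^2 with
    b_q = x_q + t_q - 1."""
    x, t = np.asarray(x, float), np.asarray(t, float)
    keep = np.isfinite(t)
    x, t = x[keep], t[keep]
    if x.size == 0:
        return 0.0
    s = t + np.maximum(x - 1.0, 0.0)            # breakpoints s_ijk
    b = x + t - 1.0                             # quadratic centers
    const = np.maximum(1.0 - x, 0.0) ** 2
    order = np.argsort(s)
    s, b, const = s[order], b[order], const[order]
    sum_b, sum_b2 = np.cumsum(b), np.cumsum(b ** 2)
    tail_const = const.sum() - np.cumsum(const) # constants of still-flat terms
    best_d, best_val = 0.0, const.sum()         # all terms flat at d = 0
    for q in range(len(s)):                     # candidate interval (s_q, s_{q+1}]
        lo = max(s[q], 0.0)
        hi = s[q + 1] if q + 1 < len(s) else np.inf
        k = q + 1                               # number of active quadratic terms
        d_star = min(max(sum_b[q] / k, lo), hi) # interval-constrained minimizer
        val = k * d_star**2 - 2.0 * d_star * sum_b[q] + sum_b2[q] + tail_const[q]
        if val < best_val:
            best_d, best_val = d_star, val
    return best_d
```

Sorting dominates, so running this scan over all categories matches the $O(\|\mathcal{P}\|_0 \log \|\mathcal{P}\|_0)$ bound of Lemma 1.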
4.2 Update X
By defining

$$a_{ijk} = \begin{cases} 1 + \max(0,\, d_{c_j} - t_{i c_j k}), & \text{if } p_{ijk} = 1 \\ 0, & \text{otherwise,} \end{cases}$$

the subproblem with respect to $\mathbf{X}$ can be written as

$$\min_{\mathbf{X} \in \mathbb{R}^{m \times n}} h(\mathbf{X}) + \lambda \|\mathbf{X}\|_*, \quad \text{where } h(\mathbf{X}) := \alpha \sum_{ijk:\, p_{ijk}=1} \max(a_{ijk} - x_{ij},\, 0)^2 + (1 - \alpha) \sum_{ijk:\, p_{ijk}=0} x_{ij}^2. \qquad (6)$$

Since there are $O(mnl)$ terms in the objective function, a naive implementation will take at least
$O(mnl)$ time, which is computationally infeasible when the data is large. To address this issue, we
use proximal gradient descent to solve the problem. At each iteration, $\mathbf{X}$ is updated by

$$\mathbf{X} \leftarrow S_\lambda\big(\mathbf{X} - \gamma \nabla h(\mathbf{X})\big), \qquad (7)$$

where $S_\lambda(\cdot)$ is the soft-thresholding operator for singular values.5

5. If $\mathbf{X}$ has the singular value decomposition $\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T$, then $S_\lambda(\mathbf{X}) = \mathbf{U}(\boldsymbol{\Sigma} - \lambda \mathbf{I})_+ \mathbf{V}^T$, where $a_+ = \max(0, a)$.
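The operator $S_\lambda$ has the closed form of footnote 5; a dense sketch (the scalable path replaces the full SVD with the randomized SVD discussed below):

```python
import numpy as np

def soft_threshold_singular_values(X, lam):
    """S_lambda(X) = U (Sigma - lam I)_+ V^T: shrink every singular value by
    lam and truncate at zero (the proximal operator of the nuclear norm)."""
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    sigma = np.maximum(sigma - lam, 0.0)
    return (U * sigma) @ Vt
```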
Table 1: CPU time for solving problem (4) with different numbers of purchase records

m (# users)    n (# items)    l (# time slots)    $\|\mathcal{P}\|_0$    k     CPU Time (in seconds)
1,000,000      1,000,000      1,000               11,112,400     10    595
1,000,000      1,000,000      1,000               43,106,100     10    1,791
1,000,000      1,000,000      1,000               166,478,000    10    6,496
In order to efficiently compute the top singular vectors of $\mathbf{X} - \gamma \nabla h(\mathbf{X})$, we rewrite it as

$$\mathbf{X} - \gamma \nabla h(\mathbf{X}) = \big[1 - 2\gamma(1 - \alpha) l\big] \mathbf{X} + \gamma \Big( 2(1 - \alpha) \sum_{ijk:\, p_{ijk}=1} x_{ij} + 2\alpha \sum_{ijk:\, p_{ijk}=1} \max(a_{ijk} - x_{ij},\, 0) \Big) = f_a(\mathbf{X}) + f_b(\mathbf{X}),$$

where the two sums are taken entrywise over the positive records. Since $f_a(\mathbf{X})$ is of low-rank and $f_b(\mathbf{X})$ is sparse, multiplying $(\mathbf{X} - \gamma \nabla h(\mathbf{X}))$ with a skinny $m$ by
$k$ matrix can be computed in $O(nk^2 + mk^2 + \|\mathcal{P}\|_0 k)$ time. As shown in [12], each iteration of
proximal gradient descent for nuclear norm minimization only requires a fixed number of iterations
of randomized SVD (or equivalently, power iterations) using the warm start strategy, thus we have
the following lemma.

Lemma 2. A proximal gradient descent algorithm can be applied to solve problem (6) within
$O(nk^2 T + mk^2 T + \|\mathcal{P}\|_0 k T)$ time, where $T$ is the number of iterations.

We note that the algorithm is guaranteed to converge to the true solution. This is because when we
apply a fixed number of iterations to update $\mathbf{X}$ via problem (7), it is equivalent to the "inexact gradient
descent update" where each gradient is approximately computed and the approximation error is upper
bounded by a constant between zero and one. Intuitively speaking, when the gradient converges to 0,
the error will also converge to 0 at an even faster rate. See [12] for the detailed explanations.
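The cost claim rests on applying $\mathbf{X} - \gamma \nabla h(\mathbf{X})$ to a skinny matrix without ever forming an $m \times n$ array. A sketch under the assumption that the iterate is kept in factored form $\mathbf{X} = \mathbf{A}\mathbf{B}^T$ and that the per-entry values of $f_b$ have been aggregated into a sparse matrix:

```python
import numpy as np
import scipy.sparse as sp

def grad_step_matvec(A, B, rows, cols, vals, V, gamma, alpha, l):
    """Compute (X - gamma * grad h(X)) @ V with X = A @ B.T of rank k.
    (rows, cols, vals) hold, for each (i, j) with positive records, the value
    2*(1-alpha)*x_ij + 2*alpha*max(a_ijk - x_ij, 0) aggregated over k."""
    m, n = A.shape[0], B.shape[0]
    f_a = (1.0 - 2.0 * gamma * (1.0 - alpha) * l) * (A @ (B.T @ V))
    F_b = sp.csr_matrix((gamma * np.asarray(vals), (rows, cols)), shape=(m, n))
    return f_a + F_b @ V        # O(nk^2 + mk^2 + ||P||_0 k) per application
```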
4.3 Overall Algorithm
Combining the two subproblems together, the time complexity of each iteration of the proposed
algorithm is:

$$O\big(\|\mathcal{P}\|_0 \log \|\mathcal{P}\|_0 + nk^2 T + mk^2 T + \|\mathcal{P}\|_0 k T\big).$$

Remark: Since each user should make at least one purchase and each item should be purchased at
least once to be included in $\mathcal{P}$, $n$ and $m$ are smaller than $\|\mathcal{P}\|_0$. Also, since $k$ and $T$ are usually
very small, the time complexity to solve problem (4) is dominated by the term $\|\mathcal{P}\|_0$, which is a
significant improvement over the naive approach with at least $O(mnl)$ complexity.

Since our problem has only two blocks $\mathbf{d}$, $\mathbf{X}$ and each subproblem is convex, our optimization
algorithm is guaranteed to converge to a stationary point [11]. Indeed, it converges very fast in
practice. As a concrete example, our experiment shows that it takes only 9 iterations to optimize a
problem with 1 million users, 1 million items, and more than 166 million purchase records.
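Putting the two subproblems together gives the overall loop; `update_d` and `update_X` stand for the subroutines of Sections 4.1 and 4.2, and the stopping rule is an assumption:

```python
def fit(X0, d0, objective, update_d, update_X, max_outer=50, tol=1e-4):
    """Alternating minimization over (d, X) until f(X, d) stops improving."""
    X, d = X0, d0
    prev = objective(X, d)
    for _ in range(max_outer):
        d = update_d(X)          # exact closed-form scan per category (Sec. 4.1)
        X = update_X(d)          # proximal gradient with randomized SVD (Sec. 4.2)
        cur = objective(X, d)
        if prev - cur <= tol * max(1.0, abs(prev)):
            break
        prev = cur
    return X, d
```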
5 Experiments
5.1 Experiment with Synthesized Data
We first conduct experiments with simulated data to verify that the proposed demand-aware recommendation algorithm is computationally efficient and robust to noise. To this end, we first construct a
low-rank matrix $\mathbf{X} = \mathbf{W}\mathbf{H}^T$, where $\mathbf{W} \in \mathbb{R}^{m \times 10}$ and $\mathbf{H} \in \mathbb{R}^{n \times 10}$ are random Gaussian matrices
with entries drawn from $\mathcal{N}(1, 0.5)$, and then normalize $\mathbf{X}$ to the range $[0, 1]$. We randomly assign
all the $n$ items to $r$ categories, with their inter-purchase durations $\mathbf{d}$ equaling $[10, 20, \ldots, 10r]$. We
then construct the high purchase intention set $\Omega = \{(i, j, k) \mid t_{i c_j k} \ge d_{c_j} \text{ and } x_{ij} \ge 0.5\}$, and
sample a subset of its entries as the observed purchase records. We let $n = m$ and vary them in the
range $\{10{,}000,\, 20{,}000,\, 30{,}000,\, 40{,}000\}$. We also vary $r$ in the range $\{10, 20, \ldots, 100\}$. Given the
learned durations $\hat{\mathbf{d}}$, we use $\|\mathbf{d} - \hat{\mathbf{d}}\|_2 / \|\mathbf{d}\|_2$ to measure the prediction errors.
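The protocol can be reproduced along the following lines. Treating demand as recurring every $d_c$ slots for items with $x_{ij} \ge 0.5$, and the sampling rate `observe_frac`, are our own simplifications, since the text does not spell out the purchase process:

```python
import numpy as np

def make_synthetic(m, n, r, l, observe_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(1.0, 0.5, (m, 10)) @ rng.normal(1.0, 0.5, (n, 10)).T
    X = (X - X.min()) / (X.max() - X.min())         # normalize to [0, 1]
    category = rng.integers(0, r, size=n)
    d = 10.0 * np.arange(1, r + 1)                  # durations [10, 20, ..., 10r]
    records = []
    for i, j in zip(*np.nonzero(X >= 0.5)):
        for k in range(0, l, int(d[category[j]])):  # demand recurs every d_c slots
            if rng.random() < observe_frac:         # one-sided sampling of positives
                records.append((int(i), int(j), int(k)))
    return X, d, category, records
```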
[Figure 1: Prediction errors $\|\mathbf{d} - \hat{\mathbf{d}}\|_2 / \|\mathbf{d}\|_2$ on synthetic data sets. Panels: (a) error vs. number of users/items; (b) error vs. number of categories; (c) error vs. noise level.]
Accuracy. Figures 1(a) and 1(b) clearly show that the proposed algorithm can perfectly recover the
underlying inter-purchase durations with varied numbers of users, items, and categories. To further
evaluate the robustness of the proposed algorithm, we randomly flip some entries in tensor $\mathcal{P}$ from
0 to 1 to simulate the rare cases of purchasing two items in the same category in close temporal
succession. Figure 1(c) shows that when the ratio of noisy entries is not large, the predicted
durations $\hat{\mathbf{d}}$ are close enough to the true durations, thus verifying the robustness of the proposed
algorithm.

Scalability. To verify the scalability of the proposed algorithm, we fix the numbers of users and items
to be 1 million, the number of time slots to be 1000, and vary the number of purchase records (i.e.,
$\|\mathcal{P}\|_0$). Table 1 summarizes the running time of solving problem (4) on a computer with 32 GB main
memory using a single thread. We observe that the proposed algorithm is extremely efficient: even
with 1 million users, 1 million items, and more than 166 million purchase records, the running
time is less than 2 hours.
5.2 Experiment with Real-World Data
In the real-world experiments, we evaluate the proposed demand-aware recommendation algorithm
by comparing it with six state-of-the-art recommendation methods: (a) M3F, maximum-margin
matrix factorization [24]; (b) PMF, probabilistic matrix factorization [25]; (c) WR-MF, weighted regularized matrix factorization [14]; (d) CP-APR, Candecomp-Parafac alternating Poisson regression
[7]; (e) Rubik, knowledge-guided tensor factorization and completion [30]; and (f) BPTF,
Bayesian probabilistic tensor factorization [31]. Among them, M3F and PMF are widely used static
collaborative filtering algorithms. We include these two algorithms as baselines to examine whether
traditional collaborative filtering algorithms are suitable for general e-commerce recommendation
involving both durable and nondurable goods. Since they require explicit ratings as inputs, we
follow [2] and generate numerical ratings based on the frequencies of (user, item) consumption pairs.
WR-MF is essentially the positive-unlabeled version of PMF and has been shown to be very effective in
modeling implicit feedback data. The other three baselines, i.e., CP-APR, Rubik, and BPTF, are
tensor-based methods that can consider time utility when making recommendations. We refer to the
proposed recommendation algorithm as Demand-Aware Recommender for One-Sided Sampling,
or DAROSS for short.
Our testbeds are two real-world data sets, Tmall6 and Amazon Review7. Since some of the baseline
algorithms are not scalable enough, we first conduct experiments on their subsets and then on the full
set of Amazon Review. To generate the subsets, we randomly sample 80 item categories from the
Tmall data set and select the users who have purchased at least 3 items within these categories, leading
to the purchase records of 377 users and 572 items. For the Amazon Review data set, we randomly select
300 users who have provided reviews to at least 5 item categories on Amazon.com. This leads to a
total of 5,111 items belonging to 11 categories. Time information for both data sets is provided in
days, and we have 177 and 749 time slots for the Tmall and Amazon Review subsets, respectively. The
full Amazon Review data set is significantly larger than its subset. After removing duplicate items, it
contains more than 72 million product reviews from 19.8 million users and 7.7 million items.

6. http://ijcai-15.org/index.php/repeat-buyers-prediction-competition
7. http://jmcauley.ucsd.edu/data/amazon/
[Figure 2: Prediction performance on the real-world data sets Tmall and Amazon Review subsets. Panels: (a) category prediction; (b) purchase time prediction.]
Table 2: Estimated inter-reviewing durations for Amazon Review subset
Categories
d
Instant
Apps for
Video
Android
0
0
Automotive Baby Beauty
326
0
0
Digital
Grocery
Musical
Office
Patio ...
Pet
Music
... Food
Instruments
Products
Garden
Supplies
158
0
38
94
271
40
These items belong to 24 item categories. The collected reviews span a long range of time, from May 1996 to
July 2014, which leads to 6,639 time slots in total. Compared to its subset, the full set is a much
more challenging data set, both due to its much larger size and its much lower sampling rate: many
reviewers only provided a few reviews, and many items were only reviewed a small number of times.
For each user, we randomly sample 90% of her purchase records as the training data, and use the
remaining 10% as the test data. For each purchase record $(u, i, t)$ in the test set, we evaluate all
the algorithms on two tasks: (i) category prediction, and (ii) purchase time prediction. In the first
task, we record the highest ranking of items within item $i$'s category among all items at
time $t$. Since a purchase record $(u, i, t)$ may suggest that in time slot $t$, user $u$ needed an item
that shares similar functionalities with item $i$, category prediction essentially checks whether the
recommendation algorithms recognize this need. In the second task, we record the number of slots
between the true purchase time $t$ and its nearest predicted purchase time within item $i$'s category.
Ideally, good recommendations should have both small category rankings and small time errors. Thus
we adopt the average top percentages, i.e., (average category ranking) / $n$ × 100% and (average
time error) / $l$ × 100%, as the evaluation metrics for the category and purchase time prediction tasks,
respectively. The algorithms M3F, PMF, and WR-MF are excluded from the purchase time prediction
task since they are static models that do not consider time information.
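The two metrics can be sketched as follows; `scores` is the model's utility for every item at the purchase time (assumed to contain at least one item of the target category), and how each baseline produces predicted purchase slots differs per method:

```python
import numpy as np

def category_rank_percent(scores, item_category, target_category, n):
    """(highest rank of any item in the target category) / n * 100."""
    order = np.argsort(-scores)                     # best-scored items first
    hit = np.nonzero(item_category[order] == target_category)[0][0]
    return 100.0 * (hit + 1) / n                    # 1-based rank

def time_error_percent(true_slot, predicted_slots, l):
    """(slots to the nearest predicted purchase in the category) / l * 100."""
    gap = min(abs(true_slot - k) for k in predicted_slots)
    return 100.0 * gap / l
```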
Figure 2 displays the predictive performance of the seven recommendation algorithms on the Tmall
and Amazon Review subsets. As expected, M3F and PMF fail to deliver strong performance since
they neither take into account users' demands nor consider the positive-unlabeled nature of the
data. This is verified by the performance of WR-MF: it significantly outperforms M3F and PMF
by considering the PU issue and obtains the second-best item prediction accuracy on both data sets
(while being unable to provide a purchase time prediction). By taking both issues into account,
our proposed algorithm DAROSS yields the best performance for both data sets and both tasks.
Table 2 reports the inter-reviewing durations of Amazon Review subset estimated by our algorithm.
Although they may not perfectly reflect the true inter-purchase durations, the estimated durations
clearly distinguish between durable good categories, e.g., automotive, musical instruments, and
non-durable good categories, e.g., instant video, apps, and food. Indeed, the learned inter-purchase
durations can also play an important role in applications more advanced than recommender systems,
such as inventory management, operations management, and sales/marketing mechanisms. We do
not report the estimated durations of Tmall herein since the item categories are anonymized in the
data set.
Finally, we conduct experiments on the full Amazon Review data set. In this study, we replace
category prediction with a stricter evaluation metric, item prediction [8], which indicates the
predicted ranking of item $i$ among all items at time $t$ for each purchase record $(u, i, t)$ in the test set.
Since most of our baseline algorithms fail to handle such a large data set, we only obtain the predictive
performance of three algorithms: DAROSS, WR-MF, and PMF. Note that for such a large data set,
prediction time instead of training time becomes the bottleneck: to evaluate average item rankings, we
need to compute the scores of all the 7.7 million items, which is computationally inefficient. Therefore,
we only sample a subset of items for each user and estimate the rankings of her purchased items.
Using this evaluation method, the average item ranking percentages for DAROSS, WR-MF and PMF
are 16.7%, 27.3%, and 38.4%, respectively. In addition to superior performance, it only takes our
algorithm 10 iterations and 1 hour to converge to a good solution. Since WR-MF and PMF are both
static models, our algorithm is the only approach evaluated here that considers time utility while
being scalable enough to handle the full Amazon Review data set. Note that this data set has more
users, items, and time slots but fewer purchase records than our largest synthesized data set, and the
running time of the former data set is lower than the latter one. This clearly verifies that the time
complexity of our algorithm is dominated by the number of purchase records instead of the tensor
size. Interestingly, we found that some inter-reviewing durations estimated from the full Amazon
Review data set are much smaller than the durations reported in Table 2. This is because the estimated
durations tend to be close to the minimum reviewing/purchasing gap within each category, thus
may be affected by outliers who review/purchase durable goods in close temporal succession. The
problem of improving the algorithm robustness will be studied in our future work. On the other hand,
this result verifies the effectiveness of the PU formulation ? even if the durations are underestimated,
our algorithm still outperforms the competitors by a considerable margin. As a final note, we want
to point out that Tmall and Amazon Review may not take full advantage of the proposed algorithm,
since (i) their categories are relatively coarse and may contain multiple sub-categories with different
durations, and (ii) the time stamps of Amazon Review reflect the review time instead of purchase
time, and inter-reviewing durations could be different from inter-purchase durations. By choosing a
purchase history data set with a more proper category granularity, we expect to achieve more accurate
duration estimations and also better recommendation performance.
6 Conclusion
In this paper, we examine the problem of demand-aware recommendation in settings where inter-purchase duration within item categories affects users' purchase intention in combination with
intrinsic properties of the items themselves. We formulate it as a tensor nuclear norm minimization
problem that seeks to jointly learn the form utility tensor and a vector of inter-purchase durations,
and propose a scalable optimization algorithm with a tractable time complexity. Our empirical
studies show that the proposed approach can yield perfect recovery of duration vectors in noiseless
settings; it is robust to noise and scalable as analyzed theoretically. On two real-world data sets, Tmall
and Amazon Review, we show that our algorithm outperforms six state-of-the-art recommendation
algorithms on the tasks of category, item, and purchase time predictions.
Acknowledgements
Cho-Jui Hsieh and Yao Li acknowledge the support of NSF IIS-1719097, TACC and Nvidia.
References
[1] Gediminas Adomavicius and Alexander Tuzhilin. Context-aware recommender systems. In Recommender Systems Handbook, pages 217–253. Springer, New York, NY, 2011.
[2] Linas Baltrunas and Xavier Amatriain. Towards time-dependant recommendation based on implicit feedback. In Workshop on Context-Aware Recommender Systems, 2009.
[3] Jesús Bobadilla, Fernando Ortega, Antonio Hernando, and Jesús Bernal. A collaborative filtering approach to mitigate the new user cold start problem. Knowl.-Based Syst., 26:225–238, February 2012.
[4] Pedro G. Campos, Fernando Díez, and Iván Cantador. Time-aware recommender systems: a comprehensive survey and analysis of existing evaluation protocols. User Model. User-Adapt. Interact., 24(1-2):67–119, 2014.
[5] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] Christopher Chatfield and Gerald J. Goodhardt. A consumer purchasing model with erlang inter-purchase times. Journal of the American Statistical Association, 68(344):828–835, 1973.
[7] Eric C. Chi and Tamara G. Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications, 33(4):1272–1299, 2012.
[8] Nan Du, Yichen Wang, Niao He, Jimeng Sun, and Le Song. Time-sensitive recommendation from recurrent user activities. In NIPS, pages 3474–3482, 2015.
[9] Marthinus Christoffel du Plessis, Gang Niu, and Masashi Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, pages 703–711, 2014.
[10] Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[11] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26:127–136, 2000.
[12] C.-J. Hsieh and P. A. Olsen. Nuclear norm minimization via active subspace selection. In ICML, 2014.
[13] Cho-Jui Hsieh, Nagarajan Natarajan, and Inderjit S. Dhillon. PU learning for matrix completion. In ICML, pages 2445–2453, 2015.
[14] Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, pages 263–272. IEEE, 2008.
[15] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, pages 263–272, 2008.
[16] P. Jain and S. Oh. Provable tensor factorization with missing data. In NIPS, pages 1431–1439, 2014.
[17] Yehuda Koren. Collaborative filtering with temporal dynamics. Commun. ACM, 53(4):89–97, April 2010.
[18] Dokyun Lee and Kartik Hosanagar. Impact of recommender systems on sales volume and diversity. In Proc. Int. Conf. Inf. Syst., Auckland, New Zealand, December 2014.
[19] Bin Li, Xingquan Zhu, Ruijiang Li, Chengqi Zhang, Xiangyang Xue, and Xindong Wu. Cross-domain collaborative filtering over time. In IJCAI, pages 2293–2298, 2011.
[20] Bing Liu, Yang Dai, Xiaoli Li, Wee Sun Lee, and Philip S. Yu. Building text classifiers using positive and unlabeled examples. In ICML, pages 179–188, 2003.
[21] Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):208–220, 2013.
[22] Atsuhiro Narita, Kohei Hayashi, Ryota Tomioka, and Hisashi Kashima. Tensor factorization using auxiliary information. In ECML/PKDD, pages 501–516, 2011.
[23] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461, 2009.
[24] Jason D. M. Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, pages 713–719, 2005.
[25] Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, pages 880–887, 2008.
[26] Clayton Scott et al. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012.
[27] Robert L. Sexton. Exploring Economics. Cengage Learning, Boston, MA, 2013.
[28] Robert L. Steiner. The prejudice against marketing. J. Marketing, 40(3):2–9, July 1976.
[29] John Z. Sun, Dhruv Parthasarathy, and Kush R. Varshney. Collaborative Kalman filtering for dynamic matrix factorization. IEEE Trans. Signal Process., 62(14):3499–3509, 15 July 2014.
[30] Yichen Wang, Robert Chen, Joydeep Ghosh, Joshua C. Denny, Abel N. Kho, You Chen, Bradley A. Malin, and Jimeng Sun. Rubik: Knowledge guided tensor factorization and completion for health data analytics. In SIGKDD, pages 1265–1274, 2015.
[31] Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff G. Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, pages 211–222, 2010.
[32] Jinfeng Yi, Rong Jin, Shaili Jain, and Anil K. Jain. Inferring users' preferences from crowdsourced pairwise comparisons: A matrix completion approach. In First AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2013.
6,452 | 6,836 | SGD Learns the Conjugate Kernel Class of the
Network
Amit Daniely
Hebrew University and Google Research
[email protected]
Abstract
We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn,
in polynomial time, a function that is competitive with the best function in the conjugate
kernel space of the network, as defined in Daniely et al. [2016]. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is
the first polynomial-time guarantee for the standard neural network learning algorithm for
networks of depth more than two.
As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD
is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially
bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any
continuous function (not in polynomial time), complementing classical expressivity results.
1 Introduction
While stochastic gradient descent (SGD) from a random initialization is probably the most popular
supervised learning algorithm today, we have very few results that depict conditions that guarantee
its success. Indeed, to the best of our knowledge, Andoni et al. [2014] provides the only known result
of this form, and it is valid in a rather restricted setting. Namely, for depth-2 networks, where the
underlying distribution is Gaussian, the algorithm is full gradient descent (rather than SGD), and the
task is regression when the learnt function is a constant degree polynomial.
We build on the framework of Daniely et al. [2016] to establish guarantees on SGD in a rather
general setting. Daniely et al. [2016] defined a framework that associates a reproducing kernel to a
network architecture. They also connected the kernel to the network via the random initialization.
Namely, they showed that right after the random initialization, any function in the kernel space can
be approximated by changing the weights of the last layer. The quality of the approximation depends
on the size of the network and the norm of the function in the kernel space.
As optimizing the last layer is a convex procedure, the result of Daniely et al. [2016] intuitively
shows that the optimization process starts from a favourable point for learning a function in the
conjugate kernel space. In this paper we verify this intuition. Namely, for a fairly general family of
architectures (that contains fully connected networks and convolutional networks) and supervised
learning tasks, we show that if the network is large enough, the learning rate is small enough, and
the number of SGD steps is large enough as well, SGD is guaranteed to learn any function in the
corresponding kernel space. We emphasize that the number of steps and the size of the network are
only required to be polynomial (which is best possible) in the relevant parameters: the norm of the
function, the required accuracy parameter (ε), and the dimension of the input and the output of the
network. Likewise, the result holds for any input distribution.
To evaluate our result, one should understand which functions it guarantees that SGD will learn.
Namely, what functions reside in the conjugate kernel space, how rich it is, and how good those
functions are as predictors. From an empirical perspective, in [Daniely et al., 2017], it is shown that
for standard convolutional networks the conjugate class contains functions whose performance is
close to the performance of the function that is actually learned by the network. This is based on
experiments on the standard CIFAR-10 dataset. From a theoretical perspective, we list below a few
implications that demonstrate the richness of the conjugate kernel space. These implications are valid
for fully connected networks of any depth between 2 and log(n), where n is the input dimension.
Likewise, they are also valid for convolutional networks of any depth between 2 and log(n), and with
constantly many convolutional layers.
• SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. As a corollary, SGD is guaranteed to learn, in polynomial time,
conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF
formulas with constantly many literals in each term. These function classes comprise a considerable fraction of the function classes that are known to be poly-time (PAC) learnable by
any method. Exceptions include constant degree polynomial thresholds with no restriction
on the coefficients, decision lists and parities.
• SGD is guaranteed to learn, not necessarily in polynomial time, any continuous function.
This complements classical universal approximation results that show that neural networks
can (approximately) express any continuous function (see Scarselli and Tsoi [1998] for a
survey). Our results strengthen those results and show that networks are not only able to
express those functions, but are actually guaranteed to learn them.
1.1 Related work
Guarantees on SGD. As noted above, there are very few results that provide polynomial time
guarantees for SGD on NN. One notable exception is the work of Andoni et al. [2014], that proves
a result that is similar to ours, but in a substantially more restricted setting. Concretely, their result
holds for depth-2 fully connected networks, as opposed to rather general architecture and constant or
logarithmic depth in our case. Likewise, the marginal distribution on the instance space is assumed to
be Gaussian or uniform, as opposed to arbitrary in our case. In addition, the algorithm they consider
is full gradient descent, which corresponds to SGD with infinitely large mini-batch, as opposed to
SGD with arbitrary mini-batch size in our case. Finally, the underlying task is regression in which
the target function is a constant degree polynomial, whereas we consider rather general supervised
learning setting.
Other polynomial time guarantees on learning deep architectures. Various recent papers show
that poly-time learning is possible in the case that the the learnt function can be realized by a neural
network with certain (usually fairly strong) restrictions on the weights [Livni et al., 2014, Zhang
et al., 2016a, 2015, 2016b], or under the assumption that the data is generated by a generative model
that is derived from the network architecture [Arora et al., 2014, 2016]. We emphasize that the
main difference of those results from our results and the results of Andoni et al. [2014] is that they
do not provide guarantees on the standard SGD learning algorithm. Rather, they show that under
those aforementioned conditions, there are some algorithms, usually very different from SGD on the
network, that are able to learn in polynomial time.
Connection to kernels. As mentioned earlier, our paper builds on Daniely et al. [2016], who
developed the association of kernels to NN which we rely on. Several previous papers [Mairal et al.,
2014, Cho and Saul, 2009, Rahimi and Recht, 2009, 2007, Neal, 2012, Williams, 1997, Kar and
Karnick, 2012, Pennington et al., 2015, Bach, 2015, 2014, Hazan and Jaakkola, 2015, Anselmi et al.,
2015] investigated such associations, but in more restricted settings (i.e., for fewer architectures).
Some of those papers [Rahimi and Recht, 2009, 2007, Daniely et al., 2016, Kar and Karnick,
2012, Bach, 2015, 2014] also provide measure of concentration results, which show that w.h.p. the
random initialization of the network's weights is rich enough to approximate the functions in the
corresponding kernel space. As a result, these papers provide polynomial time guarantees on the
variant of SGD, where only the last layer is trained. We remark that with the exception of Daniely
et al. [2016], those results apply just to depth-2 networks.
1.2 Discussion and future directions
We next want to place this work in the appropriate learning theoretic context, and to elaborate further
on this paper?s approach for investigating neural networks. For the sake of concreteness, let us
restrict the discussion to binary classification over the Boolean cube. Namely, given examples from
a distribution D on {±1}^n × {0, 1}, the goal is to learn a function h : {±1}^n → {0, 1} whose 0-1
error, L_D^{0−1}(h) = Pr_{(x,y)∼D}(h(x) ≠ y), is as small as possible. We will use a bit of terminology.
A model is a distribution D on {±1}^n × {0, 1} and a model class is a collection M of models. We
note that any function class H ⊆ {0, 1}^({±1}^n) defines a model class, M(H), consisting of all models
D such that L_D^{0−1}(h) = 0 for some h ∈ H. We define the capacity of a model class as the minimal
number m for which there is an algorithm such that for every D ∈ M the following holds. Given m
samples from D, the algorithm is guaranteed to return, w.p. ≥ 9/10 over the samples and its internal
randomness, a function h : {±1}^n → {0, 1} with 0-1 error ≤ 1/10. We note that for function classes
the capacity is the VC dimension, up to a constant factor.
Learning theory analyzes learning algorithms via model classes. Concretely, one fixes some model
class M and shows that the algorithm is guaranteed to succeed whenever the underlying model is from
M. Often, the connection between the algorithm and the class at hand is very clear. For example, in
the case that the model is derived from a function class H, the algorithm might simply be one that
finds a function in H that makes no mistake on the given sample. The natural choice for a model class
for analyzing SGD on NN would be the class of all functions that can be realized by the network,
possibly with some reasonable restrictions on the weights. Unfortunately, this approach is probably
doomed to fail, as implied by various computational hardness results [Blum and Rivest, 1989, Kearns
and Valiant, 1994, Blum et al., 1994, Kharitonov, 1993, Klivans and Sherstov, 2006, 2007, Daniely
et al., 2014, Daniely and Shalev-Shwartz, 2016].
So, what model classes should we consider? With a few isolated exceptions (e.g. Bshouty et al.
[1998]) all known efficiently learnable model classes are either a linear model class, or contained in
an efficiently learnable linear model class. Namely, function classes composed of compositions of
some predefined embedding with linear threshold functions, or linear functions over some finite field.
Coming up with new tractable models would be fascinating progress. Still, as linear function classes
are the main tool that learning theory currently has for providing guarantees on learning, it seems
natural to try to analyze SGD via linear model classes. Our work follows this line of thought, and
we believe that there is much more to achieve via this approach. Concretely, while our bounds are
polynomial, the degree of the polynomials is rather large, and possibly much better quantitative
bounds can be achieved. To be more concrete, suppose that we consider a simple fully connected
architecture, with 2 layers, ReLU activation, and n hidden neurons. In this case, the capacity of the
model class that our results guarantee that SGD will learn is Θ̃(n^{1/3}). For comparison, the capacity
of the class of all functions that are realized by this network is Θ̃(n²). As a challenge, we encourage
the reader to prove that with this architecture (possibly with an activation that is different from the
ReLU), SGD is guaranteed to learn some model class of capacity that is super-linear in n.
2 Preliminaries
Notation. We denote vectors by bold-face letters (e.g. x), matrices by upper case letters (e.g. W),
and collections of matrices by bold-face upper case letters (e.g. W). The p-norm of x ∈ R^d is denoted
by ‖x‖_p = (∑_{i=1}^d |x_i|^p)^{1/p}. We will also use the convention that ‖x‖ = ‖x‖₂. For functions
σ : R → R we let

‖σ‖ := √(E_{X∼N(0,1)} σ²(X)) = √( (1/√(2π)) ∫_{−∞}^{∞} σ²(x) e^{−x²/2} dx ).

Let G = (V, E) be a directed acyclic graph. The set of neighbors incoming to a vertex v is
denoted in(v) := {u ∈ V | uv ∈ E}. We also denote deg(v) = |in(v)|. Given a weight function
ω : V → [0, ∞) and U ⊆ V we let ω(U) = ∑_{u∈U} ω(u). The (d−1)-dimensional sphere is denoted
S^{d−1} = {x ∈ R^d | ‖x‖ = 1}. We use [x]₊ to denote max(x, 0).
Input space. Throughout the paper we assume that each example is a sequence of n elements,
each of which is represented as a unit vector. Namely, we fix n and take the input space to be
X = X_{n,d} = (S^{d−1})^n. Each input example is denoted

x = (x¹, ..., xⁿ), where xⁱ ∈ S^{d−1}.    (1)
While this notation is slightly non-standard, it unifies input types seen in various domains (see Daniely
et al. [2016]).
Supervised learning. The goal in supervised learning is to devise a mapping from the input
space X to an output space Y based on a sample S = {(x₁, y₁), ..., (x_m, y_m)}, where (x_i, y_i) ∈
X × Y are drawn i.i.d. from a distribution D over X × Y. A supervised learning problem is further
specified by an output length k and a loss function ℓ : R^k × Y → [0, ∞), and the goal is to find
a predictor h : X → R^k whose loss, L_D(h) := E_{(x,y)∼D} ℓ(h(x), y), is small. The empirical loss
L_S(h) := (1/m) ∑_{i=1}^m ℓ(h(x_i), y_i) is commonly used as a proxy for the loss L_D. When h is defined
by a vector w of parameters, we will use the notations L_D(w) = L_D(h), L_S(w) = L_S(h) and
ℓ_{(x,y)}(w) = ℓ(h(x), y).
Regression problems correspond to k = 1, Y = R and, for instance, the squared loss ℓ_square(ŷ, y) =
(ŷ − y)². Binary classification is captured by k = 1, Y = {±1} and, say, the zero-one loss
ℓ^{0−1}(ŷ, y) = 1[ŷy ≤ 0] or the hinge loss ℓ_hinge(ŷ, y) = [1 − ŷy]₊. Multiclass classification is
captured by k being the number of classes, Y = [k], and, say, the zero-one loss ℓ^{0−1}(ŷ, y) =
1[y ∉ argmax_{y'} ŷ_{y'}] or the logistic loss ℓ_log(ŷ, y) = −log(p_y(ŷ)) where p : R^k → Δ^{k−1} is
given by p_i(ŷ) = e^{ŷ_i} / ∑_{j=1}^k e^{ŷ_j}. A loss ℓ is L-Lipschitz if for all y ∈ Y, the function ℓ_y(ŷ) := ℓ(ŷ, y) is
L-Lipschitz. Likewise, it is convex if ℓ_y is convex for every y ∈ Y.
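To make the loss definitions concrete, here is a small sketch in Python (our own illustrative code; the function names are not from the paper):

```python
import numpy as np

def hinge_loss(y_hat, y):
    # l_hinge(y_hat, y) = [1 - y_hat * y]_+ for binary y in {-1, +1}
    return max(0.0, 1.0 - y_hat * y)

def logistic_loss_multiclass(y_hat, y):
    # l_log(y_hat, y) = -log p_y(y_hat), where p is the softmax of y_hat
    z = y_hat - np.max(y_hat)             # stabilize the exponentials
    log_p = z - np.log(np.sum(np.exp(z)))
    return -log_p[y]

def zero_one_loss_multiclass(y_hat, y):
    # l^{0-1}(y_hat, y) = 1 if the true class does not attain the maximum score
    return float(np.argmax(y_hat) != y)
```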
Neural network learning. We define a neural network N to be a vertex-weighted directed acyclic
graph (DAG) whose nodes are denoted V(N) and edges E(N). The weight function will be denoted
by ω : V(N) → [0, ∞), and its sole role is to dictate the distribution of the initial weights.
We will refer to N's nodes as neurons. Each non-input neuron, i.e. a neuron with incoming edges, is
associated with an activation function σ_v : R → R. In this paper, an activation can be any function
σ : R → R that is right and left differentiable, square integrable with respect to the Gaussian measure
on R, and normalized in the sense that ‖σ‖ = 1. The set of neurons having only incoming edges
is called the output neurons. To match the setup of supervised learning defined above, a network
N has nd input neurons and k output neurons, denoted o₁, ..., o_k. A network N together with
a weight vector w = {w_uv | uv ∈ E} ∪ {b_v | v ∈ V is an internal neuron} defines a predictor
h_{N,w} : X → R^k whose prediction is given by "propagating" x forward through the network.
Concretely, we define h_{v,w}(·) to be the output of the subgraph of the neuron v as follows: for an
input neuron v, h_{v,w} outputs the corresponding coordinate in x, and for internal neurons, we define h_{v,w}
recursively as

h_{v,w}(x) = σ_v( ∑_{u∈in(v)} w_{uv} h_{u,w}(x) + b_v ).

For output neurons, we define h_{v,w} as

h_{v,w}(x) = ∑_{u∈in(v)} w_{uv} h_{u,w}(x).

Finally, we let h_{N,w}(x) = (h_{o₁,w}(x), ..., h_{o_k,w}(x)).
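The recursive definition of h_{v,w} is, in effect, a topological-order evaluation of the DAG. The following sketch mirrors it; the representation choices (plain dicts for weights, biases, and activations) are our own, not the paper's:

```python
import numpy as np

def forward(order, in_neighbors, sigma, w, b, x_inputs):
    """Evaluate h_{v,w} for every neuron v of a DAG network.

    order:        neurons in topological order (input neurons first)
    in_neighbors: dict v -> list of incoming neurons u (empty for inputs)
    sigma:        dict v -> activation for internal neurons (outputs absent)
    w:            dict (u, v) -> weight;  b: dict v -> bias of internal neurons
    x_inputs:     dict v -> input coordinate for each input neuron
    """
    h = {}
    for v in order:
        us = in_neighbors[v]
        if not us:                       # input neuron: outputs its coordinate
            h[v] = x_inputs[v]
            continue
        s = sum(w[(u, v)] * h[u] for u in us)
        if v in sigma:                   # internal neuron: activation + bias
            h[v] = sigma[v](s + b[v])
        else:                            # output neuron: linear readout
            h[v] = s
    return h
```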
We next describe the learning algorithm that we analyze in this paper. While there is no standard
training algorithm for neural networks, the algorithms used in practice are usually quite similar
to the one we describe, both in the way the weights are initialized and the way they are updated.
We will use the popular Xavier initialization [Glorot and Bengio, 2010] for the network weights.
Fix 0 ≤ β ≤ 1. We say that w⁰ = {w⁰_{uv}}_{uv∈E} ∪ {b_v : v ∈ V is an internal neuron} are β-biased random
weights (or, a β-biased random initialization) if each weight w_{uv} is sampled independently from a
normal distribution with mean 0 and variance (1 − β)·d·ω(u)/ω(in(v)) if u is an input neuron and
(1 − β)·ω(u)/ω(in(v)) otherwise. Finally, each bias term b_v is sampled independently from a normal
distribution with mean 0 and variance β. We note that the rationale behind this initialization scheme is
that for every example x and every neuron v we have E_{w⁰}(h_{v,w⁰}(x))² = 1 (see Glorot and Bengio
[2010]).
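A minimal sketch of the β-biased initialization for a single neuron v, under the simplifying assumption (ours, for illustration) of a uniform weight function ω ≡ 1, so that ω(u) = 1 and ω(in(v)) = deg(v):

```python
import numpy as np

def beta_biased_init(deg_v, beta, from_input_layer, d,
                     rng=np.random.default_rng()):
    # incoming weights: N(0, (1-beta)*d/deg(v)) if u is an input neuron,
    # N(0, (1-beta)/deg(v)) otherwise; bias: N(0, beta)
    var = (1.0 - beta) * (d if from_input_layer else 1.0) / deg_v
    w_in = rng.normal(0.0, np.sqrt(var), size=deg_v)
    b = rng.normal(0.0, np.sqrt(beta))
    return w_in, b
```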
Kernel classes. A function κ : X × X → R is a reproducing kernel, or simply a kernel, if
for every x₁, ..., x_r ∈ X, the r × r matrix Γ_{i,j} = {κ(x_i, x_j)} is positive semi-definite. Each
kernel induces a Hilbert space H_κ of functions from X to R with a corresponding norm ‖·‖_κ. For
h ∈ H_κ^k we denote ‖h‖²_κ = ∑_{i=1}^k ‖h_i‖²_κ. A kernel and its corresponding space are normalized if
∀x ∈ X, κ(x, x) = 1.
Algorithm 1 Generic Neural Network Training
Input: Network N, learning rate η > 0, batch size m, number of steps T > 0, bias parameter
0 ≤ β ≤ 1, flag zero_prediction_layer ∈ {True, False}.
Let w⁰ be β-biased random weights
if zero_prediction_layer then
  Set w⁰_{uv} = 0 whenever v is an output neuron
end if
for t = 1, ..., T do
  Obtain a mini-batch S_t = {(x_i^t, y_i^t)}_{i=1}^m ∼ D^m
  Using back-propagation, calculate a stochastic gradient v^t = ∇L_{S_t}(w^t)
  Update w^{t+1} = w^t − η v^t
end for
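For illustration, here is a minimal runnable rendering of Algorithm 1, specialized (our choice) to a depth-2 fully connected ReLU network with hinge loss; the data distribution is a placeholder of our own, and the initialization is simplified to plain Xavier scaling rather than the exact β-biased scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 20, 512, 1           # input dimension, hidden width, outputs
T, eta, m = 300, 0.1, 32       # SGD steps, learning rate, mini-batch size

# Xavier-style random first layer; zero-initialized prediction layer
W1 = rng.normal(0.0, np.sqrt(1.0 / n), size=(r, n))
W2 = np.zeros((k, r))

def sample_batch(m):
    # placeholder distribution D: unit-norm inputs, labels from a linear rule
    X = rng.normal(size=(m, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X, np.sign(X[:, 0])

for t in range(T):
    X, y = sample_batch(m)
    H = np.maximum(X @ W1.T, 0.0)               # ReLU hidden layer, (m, r)
    out = (H @ W2.T).ravel()                    # network output, (m,)
    # hinge loss [1 - y*out]_+ ; subgradient with respect to out
    g_out = np.where(1.0 - y * out > 0.0, -y, 0.0) / m
    gW2 = g_out[None, :] @ H                    # gradient of prediction layer
    gPre = (g_out[:, None] * W2) * (H > 0.0)    # backprop through the ReLU
    gW1 = gPre.T @ X
    W1 -= eta * gW1
    W2 -= eta * gW2

X, y = sample_batch(1000)
pred = np.sign(np.maximum(X @ W1.T, 0.0) @ W2.T).ravel()
print("0-1 error:", np.mean(pred != y))
```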
Figure 1: Examples of computation skeletons (four skeletons S1, S2, S3, S4; activation designations are omitted).
Kernels give rise to popular benchmarks for learning algorithms. Fix a normalized kernel κ and
M > 0. It is well known that for an L-Lipschitz loss ℓ, the SGD algorithm is guaranteed to return
a function h such that E L_D(h) ≤ min_{h'∈H_κ^k, ‖h'‖_κ≤M} L_D(h') + ε using (LM/ε)² examples. In the
context of multiclass classification, for γ > 0 we define ℓ^γ : R^k × [k] → R by ℓ^γ(ŷ, y) = 1[ŷ_y ≤
γ + max_{y'≠y} ŷ_{y'}]. We say that a distribution D on X × [k] is M-separable w.r.t. κ if there is h* ∈ H_κ^k
such that ‖h*‖_κ ≤ M and L¹_D(h*) = 0. In this case, the perceptron algorithm is guaranteed to return
a function h such that E L^{0−1}_D(h) ≤ ε using 2M²/ε examples. We note that both for perceptron and
SGD, the above mentioned results are best possible, in the sense that any algorithm with the same
guarantees will have to use at least the same number of examples, up to a constant factor.
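The (LM/ε)² rate quoted above is the standard online-to-batch bound for projected SGD; for the reader's convenience, the textbook calculation (not a claim specific to this paper) is:

```latex
% Projected SGD over {h : ||h|| <= M}, convex L-Lipschitz loss,
% step size \eta = M/(L\sqrt{T}), one fresh example per step:
\[
  \mathbb{E}\,L_D(\bar h) \;-\; \min_{\|h^*\| \le M} L_D(h^*)
  \;\le\; \frac{M^2}{2\eta T} + \frac{\eta L^2}{2}
  \;=\; \frac{ML}{\sqrt{T}} ,
\]
% so T = (ML/\epsilon)^2 steps suffice for excess loss at most \epsilon.
```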
Computation skeletons [Daniely et al., 2016]. In this section we define a simple structure which
we term a computation skeleton. The purpose of a computation skeleton is to compactly describe
a feed-forward computation from an input to an output. A single skeleton encompasses a family
of neural networks that share the same skeletal structure. Likewise, it defines a corresponding
normalized kernel.
Definition 1. A computation skeleton S is a DAG with n inputs, whose non-input nodes are labeled
by activations, and has a single output node out(S).
Figure 1 shows four example skeletons, omitting the designation of the activation functions. We
denote by |S| the number of non-input nodes of S. The following definition shows how a skeleton,
accompanied with a replication parameter r ≥ 1 and a number of output nodes k, induces a neural
network architecture.
Figure 2: A (5, 4)-realization N(S, 5, 4) of the computation skeleton S with d = 2.
Definition 2 (Realization of a skeleton). Let S be a computation skeleton and consider input
coordinates in S^{d−1} as in (1). For r, k ≥ 1 we define the following neural network N = N(S, r, k).
For each input node in S, N has d corresponding input neurons with weight 1/d. For each internal
node v ∈ S labelled by an activation σ, N has r neurons v¹, ..., v^r, each with an activation σ and
weight 1/r. In addition, N has k output neurons o₁, ..., o_k with the identity activation σ(x) = x
and weight 1. There is an edge vⁱuʲ ∈ E(N) whenever uv ∈ E(S). For every output node v in S,
each neuron vʲ is connected to all output neurons o₁, ..., o_k. We term N the (r, k)-fold realization
of S.
Note that the notion of the replication parameter r corresponds, in the terminology of convolutional
networks, to the number of channels taken in a convolutional layer and to the number of hidden
neurons taken in a fully-connected layer.
In addition to networks' architectures, a computation skeleton S also defines a normalized kernel
κ_S^β : X × X → [−1, 1]. To define the kernel, we use the notion of a conjugate activation. For
ρ ∈ [−1, 1], we denote by N_ρ the multivariate Gaussian distribution on R² with mean 0 and covariance
matrix ((1, ρ), (ρ, 1)).
Definition 3 (Conjugate activation). The conjugate activation of an activation σ is the function
σ̂ : [−1, 1] → R defined as σ̂(ρ) = E_{(X,Y)∼N_ρ} σ(X)σ(Y).
The following definition gives the kernel corresponding to a skeleton.
Definition 4 (Compositional kernels). Let S be a computation skeleton and let 0 ≤ β ≤ 1. For every
node v, inductively define a kernel κ_v^β : X × X → R as follows. For an input node v corresponding
to the ith coordinate, define κ_v^β(x, y) = ⟨xⁱ, yⁱ⟩. For a non-input node v, define

κ_v^β(x, y) = σ̂_v( (1 − β) · ( ∑_{u∈in(v)} κ_u^β(x, y) ) / |in(v)| + β ).

The final kernel κ_S^β is κ_{out(S)}^β. The resulting Hilbert space and norm are denoted H_{S,β} and ‖·‖_{S,β}
respectively.
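For the normalized ReLU, the conjugate activation has the known closed form σ̂(ρ) = (√(1−ρ²) + (π − arccos ρ)·ρ)/π (Cho and Saul [2009]). For a fully connected, layered skeleton, where all nodes of a layer share the same kernel value, Definition 4 collapses to iterating this map; a sketch (the layered specialization is our own simplification):

```python
import numpy as np

def relu_dual(rho):
    # conjugate activation of the normalized ReLU sigma(x) = sqrt(2)*max(x, 0)
    rho = np.clip(rho, -1.0, 1.0)
    return (np.sqrt(1.0 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

def skeleton_kernel(x, y, depth, beta=0.0):
    """kappa_S^beta(x, y) for a fully connected skeleton of the given depth.

    x, y are unit vectors; every layer averages all nodes of the previous one,
    so the recursion of Definition 4 reduces to iterating the dual activation.
    """
    kappa = float(np.dot(x, y))          # input nodes: <x, y>
    for _ in range(depth):
        kappa = relu_dual((1.0 - beta) * kappa + beta)
    return kappa
```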
3 Main results
An activation σ : R → R is called C-bounded if ‖σ‖_∞, ‖σ'‖_∞, ‖σ''‖_∞ ≤ C. Fix a skeleton S
and a 1-Lipschitz¹ convex loss ℓ. Define comp(S) = ∏_{i=1}^{depth(S)} max_{v∈S, depth(v)=i} (deg(v) + 1) and
C(S) = (8C)^{depth(S)} √comp(S), where C is the minimal number for which all the activations in S
are C-bounded, and depth(v) is the maximal length of a path from an input node to v. We also define
C'(S) = (4C)^{depth(S)} √comp(S), where C is the minimal number for which all the activations in
S are C-Lipschitz and satisfy |σ(0)| ≤ C. Throughout this and the remaining sections we use ≲ to hide
universal constants. Likewise, we fix the bias parameter β and therefore omit it from the relevant
notation.

¹ If ℓ is L-Lipschitz, we can replace ℓ by (1/L)ℓ and the learning rate η by Lη. The operation of algorithm 1 will
be identical to its operation before the modification. Given this observation, it is very easy to derive results for
general L given our results. Hence, to save one parameter, we will assume that L = 1.
We note that for constant depth skeletons with maximal degree that is polynomial in n, C(S) and
C'(S) are polynomial in n. These quantities are polynomial in n also for various log-depth skeletons.
For example, this is true for fully connected skeletons, or more generally, layered skeletons with
constantly many layers that are not fully connected.
Theorem 1. Suppose that all activations are C-bounded. Let M, ε > 0. Suppose that we run
algorithm 1 on the network N(S, r, k) with the following parameters:
• η = η'/r, for η' ≲ ε / (C'(S))²
• T ≳ M² / (ε η')
• r ≳ C⁴ (T η')² M² (C'(S))⁴ log(C|S| / (εδ)) / ε² + d
• Zero initialized prediction layer
• Arbitrary m
Then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E L_D(w^t) ≤
min_{h∈H_S^k, ‖h‖_S≤M} L_D(h) + ε. Here, the expectation is over the training examples.
We next consider ReLU activations. Here, C'(S) = (√32)^{depth(S)} √comp(S).
Theorem 2. Suppose that all activations are the ReLU. Let M, ε > 0. Suppose that we run algorithm
1 on the network N(S, r, k) with the following parameters:
• η = η'/r, for η' ≲ ε / (C'(S))²
• T ≳ M² / (ε η')
• r ≳ (T η')² M² (C'(S))⁴ log(|S| / (εδ)) / ε² + d
• Zero initialized prediction layer
• Arbitrary m
Then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E L_D(w^t) ≤
min_{h∈H_S^k, ‖h‖_S≤M} L_D(h) + ε. Here, the expectation is over the training examples.
Finally, we consider the case in which the last layer is also initialized randomly. Here, we provide
guarantees in a more restricted setting of supervised learning. Concretely, we consider multiclass
classification, where D is separable with a margin and ℓ is the logistic loss.
Theorem 3. Suppose that all activations are C-bounded, that D is M-separable w.r.t. κ_S, and
let ε > 0. Suppose we run algorithm 1 on N(S, r, k) with the following parameters:
• η = η'/r, for η' ≲ ε² / (M² (C(S))⁴)
• T ≳ log(k) M² / (η' ε²)
• r ≳ C⁴ (C(S))⁴ M² (T η')² log(C|S|/ε) + k + d
• Randomly initialized prediction layer
• Arbitrary m
Then, w.p. ≥ 1/4 over the choice of the initial weights and the training examples, there is t ∈ [T] such
that L_D^{0−1}(w^t) ≤ ε.
3.1 Implications
To demonstrate our results, let us elaborate on a few implications for specific network architectures.
To this end, let us fix the instance space X to be either {±1}^n or S^{n−1}. Also, fix a bias parameter
1 ≥ β > 0, a batch size m, and a skeleton S that is a skeleton of a fully connected network of
depth between 2 and log(n). Finally, we also fix the activation function to be either the ReLU or a
C-bounded activation, assume that the prediction layer is initialized to 0, and fix the loss function to
be some convex and Lipschitz loss function. Very similar results are valid for convolutional networks
with constantly many convolutional layers. We however omit the details for brevity.
Our first implication shows that SGD is guaranteed to efficiently learn constant degree polynomials
with polynomially bounded weights. To this end, let us denote by P_t the collection of degree-t
polynomials. Furthermore, for any polynomial p we denote by ‖p‖ the ℓ₂ norm of its coefficients.
Corollary 4. Fix any positive integers t₀, t₁. Suppose that we run algorithm 1 on the network
N(S, r, 1) with the following parameters:
• η = η'/r, for a sufficiently small η' = poly(ε/n)
• sufficiently large T, r = poly(n/ε, log(1/δ))
Then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E L_D(w^t) ≤
min_{p∈P_{t₀}, ‖p‖≤n^{t₁}} L_D(p) + ε. Here, the expectation is over the training examples.
We note that several hypothesis classes that were studied in PAC learning can be realized by polynomial threshold functions with polynomially bounded coefficients. This includes conjunctions, DNF
and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many
literals in each term. If we take the loss function to be the logistic loss or the hinge loss, Corollary 4
implies that SGD efficiently learns these hypothesis classes as well.
Our second implication shows that any continuous function is learnable (not necessarily in polynomial
time) by SGD.
Corollary 5. Fix a continuous function h* : S^{n−1} → R and ε, δ > 0. Assume that D is realized² by
h*. Assume that we run algorithm 1 on the network N(S, r, 1). If η > 0 is sufficiently small and T
and r are sufficiently large, then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T]
such that E L_D(w^t) ≤ ε.
3.2 Extensions
We next remark on two extensions of our main results. The extended results can be proved in a similar
fashion to our results. To avoid cumbersome notation, we restrict the proofs to the main theorems as
stated, and will elaborate on the extended results in an extended version of this manuscript. First, we
assume that the replication parameter is the same for all nodes. In practice, replication parameters for
different nodes are different. This can be captured by a vector {r_v}_{v∈Int(S)}. Our main results can
be extended to this case if for all v, r_v ≲ ∑_{u∈in(v)} r_u (a requirement that usually holds in practice).
Second, we assume that there is no weight sharing, which is standard in convolutional networks. Our
results can be extended to convolutional networks with weight sharing.
We also note that we assume that in each step of algorithm 1, a fresh batch of examples is given. In
practice this is often not the case. Rather, the algorithm is given a training set of examples, and at
each step it samples from that set. In this case, our results provide guarantees on the training loss.
If the training set is large enough, this also implies guarantees on the population loss via standard
sample complexity results.
Acknowledgments
The author thanks Roy Frostig, Yoram Singer and Kunal Talwar for valuable discussions and
comments.
² That is, if (x, y) ∼ D then y = h*(x) with probability 1.
References
A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning polynomials with neural networks. In Proceedings of the 31st International Conference on Machine Learning, pages 1908-1916, 2014.
F. Anselmi, L. Rosasco, C. Tan, and T. Poggio. Deep convolutional networks are hierarchical kernel machines. arXiv:1508.01084, 2015.
Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, pages 584-592, 2014.
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski. Provable learning of noisy-or networks. arXiv preprint arXiv:1612.08795, 2016.
F. Bach. Breaking the curse of dimensionality with convex neural networks. arXiv:1412.8690, 2014.
F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. 2015.
A. Blum, M. Furst, J. Jackson, M. Kearns, Y. Mansour, and Steven Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pages 253-262. ACM, 1994.
Avrim Blum and Ronald L. Rivest. Training a 3-node neural net is NP-Complete. In David S. Touretzky, editor, Advances in Neural Information Processing Systems I, pages 494-501. Morgan Kaufmann, 1989.
Nader H. Bshouty, Christino Tamon, and David K. Wilson. On learning width two branching programs. Information Processing Letters, 65(4):217-222, 1998.
Y. Cho and L.K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342-350, 2009.
A. Daniely and S. Shalev-Shwartz. Complexity theoretic limitations on learning DNFs. In COLT, 2016.
A. Daniely, N. Linial, and S. Shalev-Shwartz. From average case complexity to improper learning complexity. In STOC, 2014.
Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In NIPS, 2016.
Amit Daniely, Roy Frostig, Vineet Gupta, and Yoram Singer. Random features for compositional kernels. arXiv preprint arXiv:1703.07872, 2017.
X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249-256, 2010.
T. Hazan and T. Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv:1508.05133, 2015.
Gautam C. Kamath. Bounds on the expectation of the maximum of samples from a Gaussian, 2015. URL http://www.gautamkamath.com/writings/gaussian_max.pdf.
P. Kar and H. Karnick. Random feature maps for dot product kernels. arXiv:1201.6530, 2012.
M. Kearns and L.G. Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the Association for Computing Machinery, 41(1):67-95, January 1994.
Michael Kharitonov. Cryptographic hardness of distribution-specific learning. In Proceedings of the twenty-fifth annual ACM symposium on Theory of computing, pages 372-381. ACM, 1993.
A.R. Klivans and A.A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. In FOCS, 2006.
A.R. Klivans and A.A. Sherstov. Unconditional lower bounds for learning intersections of halfspaces. Machine Learning, 69(2-3):97-114, 2007.
6,453 | 6,837 | Noise-Tolerant Interactive Learning Using
Pairwise Comparisons
Yichong Xu*, Hongyang Zhang*, Kyle Miller†, Aarti Singh*, and Artur Dubrawski†
* Machine Learning Department, Carnegie Mellon University, USA
† Auton Lab, Carnegie Mellon University, USA
{yichongx, hongyanz, aarti, awd}@cs.cmu.edu,
[email protected]
Abstract
We study the problem of interactively learning a binary classifier using noisy
labeling and pairwise comparison oracles, where the comparison oracle answers
which one of the two given instances is more likely to be positive. Learning from
such oracles has multiple applications where obtaining direct labels is harder but
pairwise comparisons are easier, and the algorithm can leverage both types of
oracles. In this paper, we attempt to characterize how the access to an easier
comparison oracle helps in improving the label and total query complexity. We
show that the comparison oracle reduces the learning problem to that of learning a
threshold function. We then present an algorithm that interactively queries the label
and comparison oracles and we characterize its query complexity under Tsybakov
and adversarial noise conditions for the comparison and labeling oracles. Our lower
bounds show that our label and total query complexities are almost optimal.
1 Introduction
Given high costs of obtaining labels for big datasets, interactive learning is gaining popularity in
both practice and theory of machine learning. On the practical side, there has been an increasing
interest in designing algorithms capable of engaging domain experts in two-way queries to facilitate
more accurate and more effort-efficient learning systems (c.f. [26, 31]). On the theoretical side, study
of interactive learning has led to significant advances such as exponential improvement of query
complexity over passive learning under certain conditions (c.f. [5, 6, 7, 15, 19, 27]). While most of
these approaches to interactive learning fix the form of an oracle, e.g., the labeling oracle, and explore
the best way of querying, recent work allows for multiple diverse forms of oracles [12, 13, 16, 33].
The focus of this paper is on this latter setting, also known as active dual supervision [4]. We
investigate how to recover a hypothesis h that is a good approximator of the optimal classifier h*,
in terms of expected 0/1 error Pr_X[h(X) ≠ h*(X)], given limited access to labels on individual
instances X ∈ 𝒳 and pairwise comparisons about which one of two given instances is more likely to
belong to the +1/−1 class.
Our study is motivated by important applications where comparisons are easier to obtain than labels,
and the algorithm can leverage both types of oracles to improve label and total query complexity.
For example, in material design, synthesizing materials for specific conditions requires expensive
experimentation, but with an appropriate algorithm we can leverage the expertise of material scientists,
for whom it may be hard to accurately assess the resulting material properties but who can quickly
compare different input conditions and suggest which ones are more promising. Similarly, in clinical
settings, precise assessment of each individual patient's health status can be difficult, expensive and/or
risky (e.g. it may require application of invasive sensors or diagnostic surgeries), but comparing
relative statuses of two patients at a time may be relatively easy and accurate. In both these scenarios
we may have access to a modest amount of individually labeled data, but the bulk of more accessible
training information is available via pairwise comparisons. There are many other examples where
Figure 1: Work flow of ADGAC-based algorithms. Left: procedure of typical active learning algorithms (learn classifier, refine sampling space, request a batch of labels from the labeling oracle). Right: procedure of our proposed ADGAC-based interactive learning algorithm, which has access to both pairwise comparison and labeling oracles.
Table 1: Comparison of various methods for learning of a generic hypothesis class (omitting log(1/δ) factors).

Label Noise          | Work | # Label                | # Query                   | Tol_comp
Tsybakov (κ)         | [18] | Õ(dθ(1/ε)^{2κ−2})      | Õ(dθ(1/ε)^{2κ−2})         | N/A
Tsybakov (κ)         | Ours | Õ((1/ε)^{2κ−2})        | Õ((1/ε)^{2κ−2} + dθ)      | Õ(ε^{2κ})
Adversarial (ν=O(ε)) | [19] | Õ(dθ)                  | Õ(dθ)                     | N/A
Adversarial (ν=O(ε)) | Ours | Õ(1)                   | Õ(dθ)                     | Õ(ε²)
humans find it easier to perform pairwise comparisons rather than providing direct labels, including
content search [17], image retrieval [31], ranking [21], etc.
Despite many successful applications of comparison oracles, many fundamental questions remain.
One of them is how to design noise-tolerant, cost-efficient algorithms that can approximate the
unknown target hypothesis to arbitrary accuracy while having access to pairwise comparisons. On
one hand, while there is theoretical analysis on the pairwise comparisons concerning the task of
learning to rank [3, 22], estimating ordinal measurement models [28] and learning combinatorial
functions [11], much remains unknown how to extend these results to more generic hypothesis classes.
On the other hand, although we have seen great progress on using single or multiple oracles with the
same form of interaction [9, 16], classification using both comparison and labeling queries remains
an interesting open problem. Independently of our work, Kane et al. [23] concurrently analyzed
a similar setting of learning to classify using both label and comparison queries. However, their
algorithms work only in the noise-free setting.
Our Contributions: Our work addresses the aforementioned issues by presenting a new algorithm,
Active Data Generation with Adversarial Comparisons (ADGAC), which learns a classifier with both
noisy labeling and noisy comparison oracles.
• We analyze ADGAC under Tsybakov (TNC) [30] and adversarial noise conditions for
the labeling oracle, along with the adversarial noise condition for the comparison oracle. Our general framework can augment any active learning algorithm by replacing the
batch sampling in these algorithms with ADGAC. Figure 1 presents the work flow of our
framework.
• We propose the A²-ADGAC algorithm, which can learn an arbitrary hypothesis class. The label
complexity of the algorithm is as small as that of learning a threshold function under both TNC and
the adversarial noise condition, independently of the structure of the hypothesis class. The total
query complexity improves over the previous best-known results under TNC, which can only
access the labeling oracle.
• We derive Margin-ADGAC to learn the class of halfspaces. This algorithm has the same
label and total query complexity as A²-ADGAC, but is computationally efficient.
• We present lower bounds on total query complexity for any algorithm that can access both
labeling and comparison oracles, and a noise tolerance lower bound for our algorithms.
These lower bounds demonstrate that our analysis is nearly optimal.
An important quantity governing the performance of our algorithms is the adversarial noise level
of comparisons: denote by Tol_comp(ε, δ, A) the adversarial noise tolerance level of comparisons that
guarantees an algorithm A to achieve an error of ε with probability at least 1 − δ. Table 1 compares
our results with previous work in terms of label complexity, total query complexity, and Tol_comp for
a generic hypothesis class C with error ε. We see that our results significantly improve over prior
Table 2: Comparison of various methods for learning of halfspaces (omitting log(1/δ) factors).

Label Noise          | Work | # Label             | # Query                  | Tol_comp  | Efficient?
Massart              | [8]  | Õ(d)                | Õ(d)                     | N/A       | No
Massart              | [5]  | poly(d)             | poly(d)                  | N/A       | Yes
Massart              | Ours | Õ(1)                | Õ(d)                     | Õ(ε²)     | Yes
Tsybakov (κ)         | [19] | Õ(d(1/ε)^{2κ−2})    | Õ(d(1/ε)^{2κ−2})         | N/A       | No
Tsybakov (κ)         | Ours | Õ((1/ε)^{2κ−2})     | Õ((1/ε)^{2κ−2} + d)      | Õ(ε^{2κ}) | Yes
Adversarial (ν=O(ε)) | [34] | Õ(d)                | Õ(d)                     | N/A       | No
Adversarial (ν=O(ε)) | [6]  | Õ(d²)               | Õ(d²)                    | N/A       | Yes
Adversarial (ν=O(ε)) | Ours | Õ(1)                | Õ(d)                     | Õ(ε²)     | Yes
work with the extra comparison oracle. Denote by d the VC-dimension of C and θ the disagreement
coefficient. We also compare the results in Table 2 for learning halfspaces under isotropic log-concave
distributions. In both cases, our algorithms enjoy small label complexity that is independent of θ and
d. This is helpful when labels are very expensive to obtain. Our algorithms also enjoy better total
query complexity under both TNC and the adversarial noise condition for efficiently learning halfspaces.
2 Preliminaries
Notations: We study the problem of learning a classifier h : 𝒳 → 𝒴 = {−1, 1}, where 𝒳
and 𝒴 are the instance space and label space, respectively. Denote by P_{XY} the distribution over
𝒳 × 𝒴 and let P_X be the marginal distribution over 𝒳. A hypothesis class C is a set of functions
h : 𝒳 → 𝒴. For any function h, define the error of h under distribution D over 𝒳 × 𝒴 as
err_D(h) = Pr_{(X,Y)∼D}[h(X) ≠ Y]. Let err(h) = err_{P_{XY}}(h). Suppose that h* ∈ C satisfies
err(h*) = inf_{h∈C} err(h). For simplicity, we assume that such an h* exists in class C.
We apply the concept of disagreement coefficient from Hanneke [18] for generic hypothesis classes
in this paper. In particular, for any set V ⊆ C, we denote DIS(V) = {x ∈ 𝒳 : ∃h₁, h₂ ∈
V, h₁(x) ≠ h₂(x)}. The disagreement coefficient is defined as θ = sup_{r>0} Pr[DIS(B(h*, r))]/r, where
B(h*, r) = {h ∈ C : Pr_{X∼P_X}[h(X) ≠ h*(X)] ≤ r}.
Problem Setup: We analyze two kinds of noise conditions for the labeling oracle, namely, the adversarial
noise condition and the Tsybakov noise condition (TNC). We formally define them as follows.
Condition 1 (Adversarial Noise Condition for Labeling Oracle). Distribution P_{XY} satisfies the adversarial noise condition for the labeling oracle with parameter ν ≥ 0, if ν = Pr_{(X,Y)∼P_{XY}}[Y ≠ h*(X)].
Condition 2 (Tsybakov Noise Condition for Labeling Oracle). Distribution P_{XY} satisfies the Tsybakov
noise condition for the labeling oracle with parameters κ ≥ 1, μ ≥ 0, if ∀h : 𝒳 → {−1, 1}, err(h) −
err(h*) ≥ μ Pr_{X∼P_X}[h(X) ≠ h*(X)]^κ. Also, h* is the Bayes optimal classifier, i.e., h*(x) =
sign(η(x) − 1/2),¹ where η(x) = Pr[Y = 1 | X = x]. The special case of κ = 1 is also called
the Massart noise condition.
In the classic active learning scenario, the algorithm has access to an unlabeled pool drawn from P_X.
The algorithm can then query the labeling oracle for any instance from the pool. The goal is to find
an h ∈ C such that the error Pr[h(X) ≠ h*(X)] ≤ ε.² The labeling oracle takes an input
x ∈ 𝒳, and outputs y ∈ {−1, 1} according to P_{XY}. In our setting, however, an extra comparison
oracle is available. This oracle takes as input a pair of instances (x, x′) ∈ 𝒳 × 𝒳, and returns a
variable Z(x, x′) ∈ {−1, 1}, where Z(x, x′) = 1 indicates that x is more likely to be positive, while
Z(x, x′) = −1 otherwise. In this paper, we discuss an adversarial noise condition for the comparison
oracle. We discuss dealing with TNC on the comparison oracle in the appendix.
Condition 3 (Adversarial Noise Condition for Comparison Oracle). Distribution P_{XX′Z} satisfies
adversarial noise with parameter ν′ ≥ 0, if ν′ = Pr[Z(X, X′)(h*(X) − h*(X′)) < 0].
¹ The assumption that h* is the Bayes optimal classifier can be relaxed if the approximation error of h* can be
quantified under assumptions on the decision boundary (c.f. [15]).
² Note that we use the disagreement Pr[h(X) ≠ h*(X)] instead of the excess error err(h) − err(h*) used in
some of the other literature. The two conditions can be linked by assuming a two-sided version of Tsybakov
noise (see e.g., Audibert 2004).
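To make Conditions 1-3 concrete, the following is a toy simulation of the two oracles for a threshold classifier on the real line; the specific noise processes shown are just one of many ways to satisfy the stated noise budgets:

```python
import numpy as np

rng = np.random.default_rng(1)
h_star = lambda x: np.sign(x)          # target classifier on X = R

def label_oracle(x, nu=0.05):
    # flip the correct label with probability nu, so
    # Pr[Y != h*(X)] <= nu (one way to satisfy Condition 1)
    y = h_star(x)
    return -y if rng.random() < nu else y

def comparison_oracle(x, xp, nu_prime=0.02):
    # Z(x, x') = 1 means "x is more likely positive"; flipping the correct
    # answer w.p. nu' gives Pr[Z*(h*(X)-h*(X')) < 0] <= nu' (Condition 3)
    z = 1 if x >= xp else -1
    return -z if rng.random() < nu_prime else z
```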
Table 3: Summary of notations.

Notation  | Meaning
C         | Hypothesis class
X, 𝒳      | Instance & instance space
Y, 𝒴      | Label & label space
Z, 𝒵      | Comparison & comparison space
d         | VC dimension of C
θ         | Disagreement coefficient
h*        | Optimal classifier in C
g*        | Optimal scoring function
κ         | Tsybakov noise level (labeling)
ν         | Adversarial noise level (labeling)
ν′        | Adversarial noise level (comparison)
err_D(h)  | Error of h on distribution D
SC_label  | Label complexity
SC_comp   | Comparison complexity
Tol_label | Noise tolerance (labeling)
Tol_comp  | Noise tolerance (comparison)
Note that we do not make any assumptions on the randomness of Z: Z(X, X′) can be either random
or deterministic as long as the joint distribution P_{XX′Z} satisfies Condition 3.
For an interactive learning algorithm A, given error ε and failure probability δ, let SC_comp(ε, δ, A)
and SC_label(ε, δ, A) be the comparison and label complexity, respectively. The query complexity of A
is defined as the sum of label and comparison complexity. Similar to the definition of Tol_comp(ε, δ, A),
define Tol_label(ε, δ, A) as the maximum ν such that algorithm A achieves an error of at most ε with
probability 1 − δ. In summary, A learns an h such that Pr[h(X) ≠ h*(X)] ≤ ε with probability
1 − δ using SC_comp(ε, δ, A) comparisons and SC_label(ε, δ, A) labels, if ν ≤ Tol_label(ε, δ, A) and
ν′ ≤ Tol_comp(ε, δ, A). We omit the parameters of SC_comp, SC_label, Tol_comp, Tol_label if they are clear
from the context. We use O(·) to express sample complexity and noise tolerance, and Õ(·) to ignore
the log(·) terms. Table 3 summarizes the main notations used throughout the paper.
3 Active Data Generation with Adversarial Comparisons (ADGAC)
The hardness of learning from pairwise comparisons follows from the error of the comparison oracle: the comparisons are noisy, and can be asymmetric and intransitive, meaning that the human might give contradicting preferences like x1 ≺ x2 ≺ x1 or x1 ≺ x2 ≺ x3 ≺ x1 (here ≺ is some preference relation). This makes traditional methods, e.g., defining a function class {h : h(x) = Z(x, x̃), x̃ ∈ 𝒳}, fail, because such a class may have infinite VC dimension.
In this section, we propose a novel algorithm, ADGAC, to address this issue. Having access to both comparison and labeling oracles, ADGAC generates a labeled dataset using techniques inspired by group-based binary search. We show that ADGAC can be combined with any active learning procedure to obtain interactive algorithms that can utilize both labeling and comparison oracles. We also provide theoretical guarantees for ADGAC.
3.1 Algorithm Description
To illustrate ADGAC, we start with a general active learning framework in Algorithm 1. Many active learning algorithms can be adapted to this framework, such as A² [7] and margin-based active algorithms [6, 5]. Here U represents the querying space/disagreement region of the algorithm (i.e., we reject an instance x if x ∉ U), and V represents a version space consisting of potential classifiers. For example, the A² algorithm can be adapted to Algorithm 1 straightforwardly by keeping U as the sample space and V as the version space. More concretely, the A² algorithm [7] for adversarial noise can be characterized by

U0 = 𝒳, V0 = C, fV(U, V, W, i) = {h : |W| · errW(h) ≤ ni Δi}, fU(U, V, W, i) = DIS(V),

where Δi and ni are parameters of the A² algorithm, and DIS(V) = {x ∈ 𝒳 : ∃h1, h2 ∈ V, h1(x) ≠ h2(x)} is the disagreement region of V. Margin-based active learning [6] can also be fitted into Algorithm 1 by taking V as the halfspace that (approximately) minimizes the hinge loss, and U as the region within the margin of that halfspace.
To efficiently apply the comparison oracle, we propose to replace step 4 in Algorithm 1 with a subroutine, ADGAC, that has access to both comparison and labeling oracles. Subroutine 2 describes ADGAC. It takes as input a dataset S and a sampling number k. ADGAC first runs the Quicksort algorithm on S using feedback from the comparison oracle, which is of the form Z(x, x′). Given that the comparison oracle Z(·, ·) might be asymmetric w.r.t. its two arguments, i.e., Z(x, x′) may not equal Z(x′, x), for each pair (xi, xj) we randomly choose (xi, xj) or (xj, xi) as the input to Z(·, ·). After Quicksort, the algorithm divides the data into multiple groups of size αm = α|S|, and does group-based binary search by sampling k labels from each group and determining the label of each group by majority vote (Subroutine 2 below).
Algorithm 1 Active Learning Framework
Input: δ, ε, a sequence of ni, functions fU, fV.
1: Initialize U ← U0 ⊆ 𝒳, V ← V0 ⊆ C.
2: for i = 1, 2, ..., log(1/ε) do
3:     Sample an unlabeled dataset S̃ of size ni. Let S ← {x : x ∈ S̃, x ∈ U}.
4:     Request the labels of x ∈ S and obtain W ← {(xi, yi) : xi ∈ S}.
5:     Update V ← fV(U, V, W, i), U ← fU(U, V, W, i).
Output: Any classifier ĥ ∈ V.
Subroutine 2 Active Data Generation with Adversarial Comparison (ADGAC)
Input: Dataset S with |S| = m, n, ε, k.
1: α ← εn/(2m).
2: Define a preference relation on S according to Z. Run Quicksort on S to rank the elements in increasing order. Obtain a sorted list S = (x1, x2, ..., xm).
3: Divide S into groups of size αm: Si = {x_{(i−1)αm+1}, ..., x_{iαm}}, i = 1, 2, ..., 1/α.
4: tmin ← 1, tmax ← 1/α.
5: while tmin < tmax do                          ⊲ Do binary search
6:     t ← ⌊(tmin + tmax)/2⌋.
7:     Sample k points uniformly without replacement from St and obtain the labels Y = {y1, ..., yk}.
8:     If Σ_{i=1}^{k} yi ≥ 0, then tmax ← t; else tmin ← t + 1.
9: For t′ > t and xi ∈ St′, let ŷi ← 1.
10: For t′ < t and xi ∈ St′, let ŷi ← −1.
11: For xi ∈ St, let ŷi be the majority of the labeled points in St.
Output: Predicted labels ŷ1, ŷ2, ..., ŷm.
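To make the flow of Subroutine 2 concrete, the following minimal Python sketch (ours, not the authors' code) implements the same three stages: a comparison-driven sort, grouping, and a group-level binary search. The `compare` and `label` callables, the use of Python's built-in sort as a stand-in for randomized Quicksort, and the resampling inside `majority` are all illustrative assumptions.

```python
import random
from functools import cmp_to_key

def adgac(S, n, eps, k, compare, label, seed=0):
    """Sketch of ADGAC: sort S with a noisy comparison oracle, split the
    sorted list into groups of size ~alpha*m, and binary-search for the
    group where labels switch from -1 to +1. `compare(x, y)` returns +1
    when x is more likely positive (so ascending sort puts positives
    last); `label(x)` returns a possibly noisy -1/+1 label."""
    rng = random.Random(seed)
    m = len(S)
    alpha = eps * n / (2 * m)
    # randomize the argument order, since the oracle may be asymmetric
    noisy_cmp = lambda x, y: compare(x, y) if rng.random() < 0.5 else -compare(y, x)
    S = sorted(S, key=cmp_to_key(noisy_cmp))  # stand-in for quicksort
    g = max(1, round(alpha * m))              # group size alpha*m
    groups = [S[i:i + g] for i in range(0, m, g)]

    def majority(group):
        pts = rng.sample(group, min(k, len(group)))
        return 1 if sum(label(x) for x in pts) >= 0 else -1

    lo, hi = 0, len(groups) - 1
    while lo < hi:                            # group-based binary search
        t = (lo + hi) // 2
        if majority(groups[t]) >= 0:
            hi = t
        else:
            lo = t + 1
    labels = []
    for j, group in enumerate(groups):
        lab = -1 if j < lo else (1 if j > lo else majority(group))
        labels.extend([lab] * len(group))     # groups below/above the
    return S, labels                          # boundary get -1 / +1
```

The error guarantees in [3] are specific to Quicksort with randomized pivots, so the `sorted` call above is only a convenience for illustration.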
For an active learning algorithm A, let A-ADGAC be the algorithm obtained by replacing step 4 with ADGAC using parameters (Si, ni, εi, ki), where εi, ki are chosen as additional parameters of the algorithm. We establish results for two specific choices of A, the A² algorithm and margin-based active learning, in Sections 4 and 5, respectively.
3.2 Theoretical Analysis of ADGAC
Before we combine ADGAC with active learning algorithms, we provide theoretical results for
ADGAC. By the algorithmic procedure, ADGAC reduces the problem of labeling the whole dataset
S to binary searching a threshold on the sorted list S. One can show that the conflicting instances
cannot be too many within each group Si , and thus binary search performs well in our algorithm. We
also use results in [3] to give an error estimate of Quicksort. We have the following result based on
the above arguments.
Theorem 4. Suppose that Conditions 2 and 3 hold for κ ≥ 1, ν′ ≥ 0, and n = Ω((1/ε)^{2κ−1} log(1/δ)). Assume a set S̃ with |S̃| = n is sampled i.i.d. from P_X and S ⊆ S̃ is an arbitrary subset of S̃ with |S| = m. There exist absolute constants C1, C2, C3 such that if we run Subroutine 2 with ε < C1, ν′ ≤ C2 ε^{2κ} δ, and k = k^{(1)}(ε, δ) := C3 (1/ε)^{2κ−2} log(log(1/ε)/δ), it will output a labeling of S such that |{xi ∈ S : ŷi ≠ h*(xi)}| ≤ εn, with probability at least 1 − δ. The expected number of comparisons required is O(m log m), and the number of sample-label pairs required is SClabel(ε, δ) = Õ(log(m/(εn)) · (1/ε)^{2κ−2} log(log(1/ε)/δ)).
Similarly, we analyze ADGAC under the adversarial noise condition w.r.t. the labeling oracle with ν = O(ε).
Theorem 5. Suppose that Conditions 1 and 3 hold for ν, ν′ ≥ 0, and n = Ω((1/ε) log(1/δ)). Assume a set S̃ with |S̃| = n is sampled i.i.d. from P_X and S ⊆ S̃ is an arbitrary subset of S̃ with |S| = m. There exist absolute constants C1, C2, C3, C4 such that if we run Subroutine 2 with ε < C1, ν′ ≤ C2 ε² δ, k = k^{(2)}(ε, δ) := C3 log(log(1/ε)/δ), and ν ≤ C4 ε, it will output a labeling of S such that |{xi ∈ S : ŷi ≠ h*(xi)}| ≤ εn, with probability at least 1 − δ. The expected number of comparisons required is O(m log m), and the number of sample-label pairs required is SClabel(ε, δ) = Õ(log(m/(εn)) · log(log(1/ε)/δ)).
Proof Sketch. We call a pair (xi, xj) an inverse pair if Z(xi, xj) = −1, h*(xi) = 1, and h*(xj) = −1, and an anti-sort pair if h*(xi) = 1, h*(xj) = −1, and i < j. We show that the expected number of inverse pairs is n(n − 1)ν′. By the results in [3], the numbers of inverse pairs and anti-sort pairs have the same expectation, and the actual number of anti-sort pairs can then be bounded by Markov's inequality. We then show that the majority label of the groups must be −1 from the beginning of the list up to some point, after which it changes to 1. With a careful choice of k, we obtain the true majority of each queried group under Tsybakov noise, so the binary search ends at the turning point of the list. The error is then bounded by the size of the groups. See the appendix for the complete proof.
Theorems 4 and 5 show that ADGAC produces a labeling of the dataset with arbitrarily small error using a label complexity that is independent of the data size. Moreover, ADGAC is computationally efficient, since it only involves sorting and binary search. These properties of ADGAC lead to improved query complexity when we combine ADGAC with other active learning algorithms.
4 A²-ADGAC: Learning of Generic Hypothesis Class

In this section, we combine ADGAC with the A² algorithm to learn a generic hypothesis class. We use the framework in Algorithm 1: let A²-ADGAC be the algorithm that replaces step 4 in Algorithm 1 with ADGAC with parameters (S, ni, εi, ki), where ni, εi, ki are parameters to be specified later. Under TNC, we have the following result.
Theorem 6. Suppose that Conditions 2 and 3 hold, and h*(x) = sign(η(x) − 1/2). There exist global constants C1, C2 such that if we run A²-ADGAC with ε < C1, ν′ ≤ Tolcomp(ε, δ) = C2 ε^{2κ} δ, εi = 2^{−(i+2)}, ni = Õ((1/εi) d log(1/δ) + (1/εi)^{2κ−1} log(1/δ)), and ki = k^{(1)}(εi, δ/(4 log(1/ε))) with k^{(1)} specified in Theorem 4, then with probability at least 1 − δ the algorithm will return a classifier ĥ with Pr[ĥ(X) ≠ h*(X)] ≤ ε with comparison and label complexity

E[SCcomp] = Õ(θ · log²(1/ε) · log(dθ) · (d log(1/ε) + (1/ε)^{2κ−2} log(1/δ))),
SClabel = Õ(log(1/ε) · log(min{1/ε, θ}) · (1/ε)^{2κ−2} log(1/δ)).

The dependence on log²(1/ε) in SCcomp can be reduced to log(1/ε) under Massart noise.
We can prove a similar result under the adversarial noise condition.
Theorem 7. Suppose that Conditions 1 and 3 hold. There exist global constants C1, C2, C3 such that if we run A²-ADGAC with ε < C1, ν′ ≤ Tolcomp(ε, δ) = C2 ε² δ, ν ≤ Tollabel(ε, δ) = C3 ε, εi = 2^{−(i+2)}, ni = Õ((1/εi) d log(1/εi) log(1/δ)), and ki = k^{(2)}(εi, δ/(4 log(1/ε))) with k^{(2)} specified in Theorem 5, then with probability at least 1 − δ the algorithm will return a classifier ĥ with Pr[ĥ(X) ≠ h*(X)] ≤ ε with comparison and label complexity

E[SCcomp] = Õ(θ d log(θd) log(1/ε) log(1/δ)),
SClabel = Õ(log(1/ε) · log(min{1/ε, θ}) · log(1/δ)).
The proofs of Theorems 6 and 7 use Theorems 4 and 5 together with standard manipulations from VC theory. Theorems 6 and 7 show that having access to even a biased comparison function can reduce the problem of learning a classifier in a high-dimensional space to that of learning a threshold classifier in one-dimensional space, as the label complexity matches that of actively learning a threshold classifier. Given that comparisons are usually easier to obtain, A²-ADGAC can save a great deal in practice due to its small label complexity. More importantly, we improve the total query complexity under TNC by separating the dependence on d and ε: the query complexity is now the sum of the two terms instead of their product. This observation shows the power of pairwise comparisons for learning classifiers. Such small label/query complexity is impossible without access to a comparison oracle, since the query complexity with only a labeling oracle is at least Ω(d (1/ε)^{2κ−2}) and Ω(d log(1/ε)) under the TNC and adversarial noise conditions, respectively [19]. Our results also match the lower bound for learning with labeling and comparison oracles up to log factors (see Section 6).
We note that Theorems 6 and 7 require a rather small Tolcomp, equal to O(ε^{2κ} δ) and O(ε² δ), respectively. We will show in Section 6.3 that requiring Tolcomp = O(ε²) is necessary in order to obtain a classifier of error ε if we restrict the labeling oracle to only learning a threshold function. Such a restriction is what enables the near-optimal label complexity specified in Theorems 6 and 7.
5 Margin-ADGAC: Learning of Halfspaces
In this section, we combine ADGAC with margin-based active learning [6] to efficiently learn the class of halfspaces. Before proceeding, we first mention a naive idea for utilizing comparisons: sample pairs (x1, x2) i.i.d. from P_X × P_X and use Z(x1, x2) as the label of x1 − x2, where Z is the feedback from the comparison oracle. However, this method cannot work well in our setting without additional assumptions on the noise condition of the labeling Z(x1, x2).
Before proceeding, we assume that P_X is isotropic log-concave on R^d; i.e., P_X has mean 0 and covariance I, and the logarithm of its density function is a concave function [5, 6]. The hypothesis class of halfspaces can be represented as C = {h : h(x) = sign(w · x), w ∈ R^d}. Denote h*(x) = sign(w* · x) for some w* ∈ R^d. Define l_τ(w, x, y) = max(1 − y(w · x)/τ, 0) and l_τ(w, W) = (1/|W|) Σ_{(x,y)∈W} l_τ(w, x, y) as the hinge loss. The expected hinge loss of w is L_τ(w, D) = E_{x∼D}[l_τ(w, x, sign(w* · x))].
Margin-based active learning [6] is a concrete example of Algorithm 1, taking V as (a singleton set containing) the hinge loss minimizer and U as the margin region around that minimizer. More concretely, take U0 = 𝒳 and V0 = {w0} for some w0 such that θ(w0, w*) ≤ π/2. The algorithm works with constants M ≥ 2, λ < 1/2 and a set of parameters ri, τi, bi, zi that equal Θ(M^{−i}) (see the proof in the appendix for the formal definition of these parameters). V always contains a single hypothesis. Suppose V = {w_{i−1}} in iteration i − 1, and let vi satisfy l_{τi}(vi, W) ≤ min_{v : ‖v − w_{i−1}‖₂ ≤ ri, ‖v‖₂ ≤ 1} l_{τi}(v, W) + λ/8, where wi is the content of V in iteration i. We then have fV(V, W, i) = {wi} = {vi/‖vi‖₂} and fU(U, V, W, i) = {x : |wi · x| ≤ bi}.
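As a concrete illustration of this update, here is a small sketch (our own, with simplified alternating projections and step sizes that the paper does not specify) of one iteration: an approximate constrained hinge-loss minimization followed by construction of the next margin region.

```python
import numpy as np

def margin_iteration(W, w_prev, r, tau, b, steps=500, lr=0.05):
    """One simplified iteration of margin-based active learning:
    approximately minimize l_tau over {v : ||v - w_prev|| <= r, ||v|| <= 1}
    by projected subgradient descent, then return the normalized
    minimizer w and the new margin region {x : |w.x| <= b}."""
    X = np.array([x for x, _ in W], dtype=float)
    y = np.array([t for _, t in W], dtype=float)
    v = w_prev.copy()
    for _ in range(steps):
        margins = y * (X @ v) / tau
        active = margins < 1.0                  # examples with nonzero loss
        grad = -(y[active, None] * X[active]).sum(0) / (tau * len(y))
        v = v - lr * grad
        # project back onto {||v - w_prev|| <= r}, then onto {||v|| <= 1}
        d = v - w_prev
        if np.linalg.norm(d) > r:
            v = w_prev + r * d / np.linalg.norm(d)
        if np.linalg.norm(v) > 1.0:
            v = v / np.linalg.norm(v)
    w = v / np.linalg.norm(v)
    in_margin = lambda x: abs(w @ x) <= b       # next querying region f_U
    return w, in_margin
```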
Let Margin-ADGAC be the algorithm obtained by replacing the sampling step in margin-based active learning with ADGAC using parameters (S, ni, εi, ki), where ni, εi, ki are additional parameters to be specified later. We have the following results under the TNC and adversarial noise conditions, respectively.
Theorem 8. Suppose that Conditions 2 and 3 hold, and h*(x) = sign(w* · x) = sign(η(x) − 1/2). There are settings of M, λ, ri, τi, bi, εi, ki, and constants C1, C2, such that for all ε ≤ C1 and ν′ ≤ Tolcomp(ε, δ) = C2 ε^{2κ} δ, if we run Margin-ADGAC with w0 such that θ(w0, w*) ≤ π/2 and ni = Õ((1/εi) d log³(dk/δ) + (1/εi)^{2κ−1} log(1/δ)), it finds w̃ such that Pr[sign(w̃ · X) ≠ sign(w* · X)] ≤ ε with probability at least 1 − δ. The comparison and label complexity are

E[SCcomp] = Õ(log²(1/ε) (d log⁴(d/δ) + (1/ε)^{2κ−2} log(1/δ))),
SClabel = Õ(log(1/ε) log(1/δ) (1/ε)^{2κ−2}).

The dependence on log²(1/ε) in SCcomp can be reduced to log(1/ε) under Massart noise.
Theorem 9. Suppose that Conditions 1 and 3 hold. There are settings of M, λ, ri, τi, bi, εi, ki, and constants C1, C2, C3, such that for all ε ≤ C1, ν′ ≤ Tolcomp(ε, δ) = C2 ε² δ, and ν ≤ Tollabel(ε, δ) = C3 ε, if we run Margin-ADGAC with ni = Õ((1/εi) d log³(dk/δ)) and w0 such that θ(w0, w*) ≤ π/2, it finds w̃ such that Pr[sign(w̃ · X) ≠ sign(w* · X)] ≤ ε with probability at least 1 − δ. The comparison and label complexity are

E[SCcomp] = Õ(log(1/ε) d log⁴(d/δ)), SClabel = Õ(log(1/ε) log(1/δ)).
The proofs of Theorems 8 and 9 differ from the conventional analysis of margin-based active learning in two respects: a) since we use labels generated by ADGAC, which are not independently sampled from the distribution P_XY, we require new techniques that can deal with adaptive noise; b) we improve the results of [6] in the dependence on d via a new Rademacher analysis.
Theorems 8 and 9 enjoy better label and query complexity than previous results (see Table 2). We mention that while Yan and Zhang [32] proposed a perceptron-like algorithm with label complexity as small as Õ(d log(1/ε)) under the Massart and adversarial noise conditions, their algorithm works only under uniform distributions over the instance space. In contrast, our algorithm Margin-ADGAC works under broad log-concave distributions. The label and total query complexity of Margin-ADGAC improve over those of traditional active learning, and the lower bounds in Section 6 show the optimality of our complexity.
6 Lower Bounds
In this section, we give lower bounds for learning with labeling and pairwise comparisons. In Section 6.1, we give a lower bound on the optimal label complexity SClabel. In Section 6.2, we use this result to give a lower bound on the total query complexity, i.e., the sum of the comparison and label complexity. Our two methods match these lower bounds up to log factors. In Section 6.3, we additionally give an information-theoretic bound on Tolcomp, which matches our algorithms in the case of Massart and adversarial noise.
Following [19, 20], we assume that there is an underlying score function g* such that h*(x) = sign(g*(x)). Note that g* need not bear any relation to η(x); we only require that g*(x) represent how likely a given x is to be positive. For instance, in digit recognition, g*(x) represents how much an image looks like a 7 (or a 9); in the clinical setting, g*(x) measures the health condition of a patient. Suppose that the distribution of g*(X) is continuous, i.e., the probability density function exists and, for every t ∈ R, Pr[g*(X) = t] = 0.
6.1 Lower Bound on Label Complexity
The definition of g* naturally induces a comparison oracle Z with Z(x, x′) = sign(g*(x) − g*(x′)). We note that this oracle is invariant to shifts of g*, i.e., g* and g* + t lead to the same comparison oracle. As a result, we cannot distinguish g* from g* + t without labels. In other words, pairwise comparisons do not help in improving the label complexity when we are learning a threshold function on R, where all instances are in their natural order. The label complexity of any algorithm is therefore lower bounded by that of learning a threshold classifier, and we formally prove this in the following theorem.
Theorem 10. For any algorithm A that can access both labeling and comparison oracles, sufficiently small ε, δ, and any score function g that takes at least two values on 𝒳, there exists a distribution P_XY satisfying Condition 2 such that the optimal function is of the form h*(x) = sign(g(x) + t) for some t ∈ R and

SClabel(ε, δ, A) = Ω((1/ε)^{2κ−2} log(1/δ)).   (1)

If P_XY satisfies Condition 1 with ν = O(ε), then SClabel satisfies (1) with κ = 1.
The lower bound in Theorem 10 matches the label complexity of A²-ADGAC and Margin-ADGAC up to a log factor, so our algorithms are near-optimal.
6.2 Lower Bound on Total Query Complexity
We use Theorem 10 to give lower bounds on the total query complexity of any algorithm which can
access both comparison and labeling oracles.
Theorem 11. For any algorithm A that can access both labeling and comparison oracles, and sufficiently small ε, δ, there exists a distribution P_XY satisfying Condition 2 such that

SCcomp(ε, δ, A) + SClabel(ε, δ, A) = Ω((1/ε)^{2κ−2} log(1/δ) + d log(1/ε)).   (2)

If P_XY satisfies Condition 1 with ν = O(ε), then SCcomp + SClabel satisfies (2) with κ = 1.
The first term of (2) follows from Theorem 10, whereas the second term follows from transforming a
lower bound of active learning with access to only the labeling oracle. The lower bounds in Theorem
11 match the performance of A2 -ADGAC and Margin-ADGAC up to log factors.
6.3 Adversarial Noise Tolerance of Comparisons
Note that label queries are typically expensive in practice. Thus it is natural to ask the following
question: what is the minimal requirement on ? 0 , given that we are only allowed to have access to
minimal label complexity as in Theorem 10? We study this problem in this section. More concretely,
8
we study the requirement on ? 0 when we learn a threshold function using labels. Suppose that the
comparison oracle gives feedback using a scoring function g?, i.e., Z(x, x0 ) = sign(?
g (x) ? g?(x0 )),
and has error ? 0 . We give a sharp minimax bound on the risk of the optimal classifier in the form of
h(x) = sign(?
g (x) ? t) for some t ? R below.
Theorem 12. Suppose that min{Pr[h*(X) = 1], Pr[h*(X) = −1]} ≥ √ν′ and that both g̃(X) and g*(X) have probability density functions. If g̃(X) induces an oracle with error ν′, then we have

min_t max_{g̃, g*} Pr[sign(g̃(X) − t) ≠ h*(X)] = Θ(√ν′).
The proof is technical and omitted. By Theorem 12, we see that the condition ν′ = O(ε²) is necessary if labels from g* are only used to learn a threshold on g̃. This matches our choice of ν′ under the Massart and adversarial noise conditions for the labeling oracle (up to a factor of δ).
7 Conclusion
We presented a general algorithmic framework, ADGAC, for learning with both comparison and labeling oracles. We proposed two variants of the base algorithm, A²-ADGAC and Margin-ADGAC, to facilitate low query complexity under Tsybakov and adversarial noise conditions. The performance of our algorithms matches lower bounds for learning with both oracles. Our analysis is relevant to a wide range of practical applications where it is easier, less expensive, and/or less risky to obtain pairwise comparisons than labels.
There are multiple directions for future work. One improvement would be to show complexity bounds for the excess risk err(h) − err(h*) instead of Pr[h ≠ h*]. Also, our bound on the comparison complexity holds only in expectation due to the limits of Quicksort; deriving concentration inequalities for the comparison complexity would be helpful. An adaptive algorithm that adjusts to different levels of noise in labels and comparisons would also be interesting, i.e., one that uses labels when comparisons are noisy and comparisons when labels are noisy. Other directions include using comparisons (or, more broadly, rankings) for other ML tasks like regression or matrix completion.
Acknowledgments
This research is supported in part by AFRL grant FA8750-17-2-0212. We thank Chicheng Zhang for
insightful ideas on improving results in [6] using Rademacher complexity.
References
[1] S. Agarwal and P. Niyogi. Stability and generalization of bipartite ranking algorithms. In Annual Conference on Learning Theory, pages 32–47, 2005.
[2] S. Agarwal and P. Niyogi. Generalization bounds for ranking algorithms via algorithmic stability. Journal of Machine Learning Research, 10:441–474, 2009.
[3] N. Ailon and M. Mohri. An efficient reduction of ranking to classification. arXiv preprint arXiv:0710.2889, 2007.
[4] J. Attenberg, P. Melville, and F. Provost. A unified approach to active dual supervision for labeling features and examples. In Machine Learning and Knowledge Discovery in Databases, pages 40–55. Springer, 2010.
[5] P. Awasthi, M.-F. Balcan, N. Haghtalab, and H. Zhang. Learning and 1-bit compressed sensing under asymmetric noise. In Annual Conference on Learning Theory, pages 152–192, 2016.
[6] P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. Journal of the ACM, 63(6):50, 2017.
[7] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 65–72. ACM, 2006.
[8] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Annual Conference on Learning Theory, pages 35–50, 2007.
[9] M.-F. Balcan and S. Hanneke. Robust interactive learning. In COLT, 2012.
[10] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In Annual Conference on Learning Theory, pages 288–316, 2013.
[11] M.-F. Balcan, E. Vitercik, and C. White. Learning combinatorial functions from pairwise comparisons. arXiv preprint arXiv:1605.09227, 2016.
[12] M.-F. Balcan and H. Zhang. Noise-tolerant life-long matrix completion via adaptive sampling. In Advances in Neural Information Processing Systems, pages 2955–2963, 2016.
[13] A. Beygelzimer, D. J. Hsu, J. Langford, and C. Zhang. Search improves label for active learning. In Advances in Neural Information Processing Systems, pages 3342–3350, 2016.
[14] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[15] R. M. Castro and R. D. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339–2353, 2008.
[16] O. Dekel, C. Gentile, and K. Sridharan. Selective sampling and active learning from single and multiple teachers. Journal of Machine Learning Research, 13:2655–2697, 2012.
[17] J. Fürnkranz and E. Hüllermeier. Preference learning and ranking by pairwise comparison. In Preference Learning, pages 65–82. Springer, 2010.
[18] S. Hanneke. Adaptive rates of convergence in active learning. In COLT. Citeseer, 2009.
[19] S. Hanneke. Theory of active learning, 2014.
[20] S. Hanneke and L. Yang. Surrogate losses in passive and active learning. arXiv preprint arXiv:1207.3772, 2012.
[21] R. Heckel, N. B. Shah, K. Ramchandran, and M. J. Wainwright. Active ranking from pairwise comparisons and the futility of parametric assumptions. arXiv preprint arXiv:1606.08842, 2016.
[22] K. G. Jamieson and R. Nowak. Active ranking using pairwise comparisons. In Advances in Neural Information Processing Systems, pages 2240–2248, 2011.
[23] D. M. Kane, S. Lovett, S. Moran, and J. Zhang. Active classification with comparison queries. arXiv preprint arXiv:1704.03564, 2017.
[24] A. Krishnamurthy. Interactive Algorithms for Unsupervised Machine Learning. PhD thesis, Carnegie Mellon University, 2015.
[25] L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures & Algorithms, 30(3):307–358, 2007.
[26] S. Maji and G. Shakhnarovich. Part and attribute discovery from relative annotations. International Journal of Computer Vision, 108(1-2):82–96, 2014.
[27] S. Sabato and T. Hess. Interactive algorithms: from pool to stream. In Annual Conference on Learning Theory, pages 1419–1439, 2016.
[28] N. B. Shah, S. Balakrishnan, J. Bradley, A. Parekh, K. Ramchandran, and M. Wainwright. When is it better to compare than to score? arXiv preprint arXiv:1406.6618, 2014.
[29] N. Stewart, G. D. Brown, and N. Chater. Absolute identification by relative judgment. Psychological Review, 112(4):881, 2005.
[30] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, pages 135–166, 2004.
[31] C. Wah, G. Van Horn, S. Branson, S. Maji, P. Perona, and S. Belongie. Similarity comparisons for interactive fine-grained categorization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 859–866, 2014.
[32] S. Yan and C. Zhang. Revisiting perceptron: Efficient and label-optimal active learning of halfspaces. arXiv preprint arXiv:1702.05581, 2017.
[33] L. Yang and J. G. Carbonell. Cost complexity of proactive learning via a reduction to realizable active learning. Technical report, CMU-ML-09-113, 2009.
[34] C. Zhang and K. Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
6,454 | 6,838 | Analyzing Hidden Representations in End-to-End
Automatic Speech Recognition Systems
Yonatan Belinkov and James Glass
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{belinkov, glass}@mit.edu
Abstract
Neural networks have become ubiquitous in automatic speech recognition systems.
While neural networks are typically used as acoustic models in more complex
systems, recent studies have explored end-to-end speech recognition systems
based on neural networks, which can be trained to directly predict text from input
acoustic features. Although such systems are conceptually elegant and simpler
than traditional systems, it is less obvious how to interpret the trained models.
In this work, we analyze the speech representations learned by a deep end-to-end
model that is based on convolutional and recurrent layers, and trained with a
connectionist temporal classification (CTC) loss. We use a pre-trained model to
generate frame-level features which are given to a classifier that is trained on frame
classification into phones. We evaluate representations from different layers of the
deep model and compare their quality for predicting phone labels. Our experiments
shed light on important aspects of the end-to-end model such as layer depth, model
complexity, and other design choices.
1 Introduction
Traditional automatic speech recognition (ASR) systems are composed of multiple components,
including an acoustic model, a language model, a lexicon, and possibly other components. Each of
these is trained independently and combined during decoding. As such, the system is not directly
trained on the speech recognition task from start to end. In contrast, end-to-end ASR systems aim
to map acoustic features directly to text (words or characters). Such models have recently become
popular in the ASR community thanks to their simple and elegant architecture [1, 2, 3, 4]. Given
sufficient training data, they also perform fairly well. Importantly, such models do not receive explicit
phonetic supervision, in contrast to traditional systems that typically rely on an acoustic model trained
to predict phonetic units (e.g. HMM phone states). Intuitively, though, end-to-end models have
to generate some internal representation that allows them to abstract over phonological units. For instance, a model that needs to generate the word "bought" should learn that in this case "g" is not pronounced as the phoneme /g/.
In this work, we investigate if and to what extent end-to-end models implicitly learn phonetic
representations. The hypothesis is that such models need to create and exploit internal representations
that correspond to phonetic units in order to perform well on the speech recognition task. Given a
pre-trained end-to-end ASR system, we use it to extract frame-level features from an acoustic signal.
For example, these may be the hidden representations of a recurrent neural network (RNN) in the
end-to-end system. We then feed these features to a classifier that is trained to predict a phonetic
property of interest such as phone recognition. Finally, we evaluate the performance of the classifier
as a measure of the quality of the input features, and by proxy the quality of the original ASR system.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We aim to provide quantitative answers to the following questions:
1. To what extent do end-to-end ASR systems learn phonetic information?
2. Which components of the system capture more phonetic information?
3. Do more complicated models learn better representations for phonology? And is ASR
performance correlated with the quality of the learned representations?
Two main types of end-to-end models for speech recognition have been proposed in the literature:
connectionist temporal classification (CTC) [1, 2] and sequence-to-sequence learning (seq2seq) [3, 4].
We focus here on CTC and leave exploration of the seq2seq model for future work.
We use a phoneme-segmented dataset for the phoneme recognition task, as it comes with time
segmentation, which allows for accurate mapping between speech frames and phone labels. We
define a frame classification task, where given representations from the CTC model, we need to
classify each frame into a corresponding phone label. More complicated tasks can be conceived
of?for example predicting a single phone given all of its aligned frames?but classifying frames is a
basic and important task to start with.
Our experiments reveal that the lowest layers in a deep end-to-end model are best suited for representing phonetic information. Applying one convolution on input features improves the representation,
but a second convolution greatly degrades phone classification accuracy. Subsequent recurrent layers
initially improve the quality of the representations. However, after a certain recurrent layer performance again drops, indicating that the top layers do not preserve all the phonetic information coming
from the bottom layers. Finally, we cluster frame representations from different layers in the deep
model and visualize them in 2D, observing different quality of grouping in different layers.
We hope that our results would promote the development of better ASR systems. For example, understanding representation learning at different layers of the end-to-end model can guide joint learning
of phoneme recognition and ASR, as recently proposed in a multi-task learning framework [5].
2 Related Work
2.1 End-to-end ASR
End-to-end models for ASR have become increasingly popular in recent years. Important studies
include models based on connectionist temporal classification (CTC) [1, 2, 6, 7] and attention-based
sequence-to-sequence models [3, 4, 8]. The CTC model is based on a recurrent neural network
that takes acoustic features as input and is trained to predict a symbol per each frame. Symbols are
typically characters, in addition to a special blank symbol. The CTC loss then marginalizes over
all possible sequences of symbols given a transcription. The sequence-to-sequence approach, on
the other hand, first encodes the sequence of acoustic features into a single vector and then decodes
that vector into the sequence of symbols (characters). The attention mechanism improves upon this
method by conditioning on a different summary of the input sequence at each decoding step.
Both of these approaches to end-to-end ASR usually predict a sequence of characters, although
there have also been initial attempts at directly predicting words [9, 10].
2.2 Analysis of neural representations
While end-to-end neural network models offer an elegant and relatively simple architecture, they are
often thought to be opaque and uninterpretable. Thus researchers have started investigating what
such models learn during the training process. For instance, previous work evaluated neural network
acoustic models on phoneme recognition using different acoustic features [11] or investigated how
such models learn invariant representations [12] and encode linguistic features [13, 14]. Others have
correlated activations of gated recurrent networks with phoneme boundaries in autoencoders [15] and
in a text-to-speech system [16]. Recent work analyzed different speaker representations [17]. A joint
audio-visual model of speech and lip movements was developed in [18], where phoneme embeddings
were shown to be closer to certain linguistic features than embeddings based on audio alone. Other
joint audio-visual models have also analyzed the learned representations in different ways [19, 20, 21].
Finally, we note that analyzing neural representations has also attracted attention in other domains
Table 1: The ASR models used in this work.

(a) DeepSpeech2.

Layer   Type   Input Size   Output Size
1       cnn1   161          1952
2       cnn2   1952         1312
3       rnn1   1312         1760
4       rnn2   1760         1760
5       rnn3   1760         1760
6       rnn4   1760         1760
7       rnn5   1760         1760
8       rnn6   1760         1760
9       rnn7   1760         1760
10      fc     1760         29

(b) DeepSpeech2-light.

Layer   Type    Input Size   Output Size
1       cnn1    161          1952
2       cnn2    1952         1312
3       lstm1   1312         600
4       lstm2   600          600
5       lstm3   600          600
6       lstm4   600          600
7       lstm5   600          600
8       fc      600          29
like vision and natural language processing, including word and sentence representations [22, 23, 24],
machine translation [25, 26], and joint vision-language models [27]. To our knowledge, hidden
representations in end-to-end ASR systems have not been thoroughly analyzed before.
3 Methodology
We follow the following procedure for evaluating representations in end-to-end ASR models. First,
we train an ASR system on a corpus of transcribed speech and freeze its parameters. Then, we use the
pre-trained ASR model to extract frame-level feature representations on a phonemically transcribed
corpus. Finally, we train a supervised classifier using the features coming from the ASR system,
and evaluate classification performance on a held-out set. In this manner, we obtain a quantitative
measure of the quality of the representations that were learned by the end-to-end ASR model. A
similar procedure has been previously applied to analyze a DNN-HMM phoneme recognition system
[14] as well as text representations in neural machine translation models [25, 26].
More formally, let x denote a sequence of acoustic features such as a spectrogram of frequency magnitudes. Let ASR_t(x) denote the output of the ASR model at the t-th input frame. Given a corresponding label sequence l, we feed ASR_t(x) to a supervised classifier that is trained to predict a corresponding label l_t. In the simplest case, we have a label at each frame and perform frame classification. As we are interested in analyzing different components of the ASR model, we also extract features from different layers k, such that ASR^k_t(x) denotes the output of the k-th layer at the t-th input frame.
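One simple way to realize ASR^k_t(x) in practice is to register a forward hook on the relevant submodule of the frozen model; which submodule corresponds to layer k depends on the implementation, so the handle passed in below is an assumption of this sketch.

```python
import torch

def layer_features(asr_model, layer, spect):
    """Capture per-frame activations of one intermediate module in a
    single forward pass over a (frozen) pre-trained ASR model."""
    feats = []

    def _grab(mod, inp, out):
        # RNN modules may return (output, state); keep the tensor part
        feats.append((out[0] if isinstance(out, tuple) else out).detach())

    handle = layer.register_forward_hook(_grab)
    with torch.no_grad():
        asr_model(spect)        # run the frozen model once
    handle.remove()
    return feats[0]             # (time, feature_dim) activations
```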
We next describe the ASR model and the supervised classifier in more detail.
3.1
ASR model
The end-to-end model we use in this work is DeepSpeech2 [7], an acoustics-to-characters system
based on a deep neural network. The input to the model is a sequence of audio spectrograms
(frequency magnitudes), obtained with a 20ms Hamming window and a stride of 10ms. With a
sampling rate of 16kHz, we have 161 dimensional input features. Table 1a details the different layers
in this model. The first two layers are convolutions where the number of output feature maps is 32
at each layer. The kernel sizes of the first and second convolutional layers are 41x11 and 21x11
respectively, where a convolution of TxF has a size T in the time domain and F in the frequency
domain. Both convolutional layers have a stride of 2 in the time domain while the first layer also has
a stride of 2 in the frequency domain. This setting results in 1952/1312 features per time frame after
the first/second convolutional layers.
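To make the input representation concrete, here is a small NumPy sketch (ours; the released pipeline may differ in windowing and normalization details) that produces frames with exactly 161 frequency magnitudes.

```python
import numpy as np

def spectrogram_frames(wav, sr=16000, win_ms=20, hop_ms=10):
    """Magnitude spectrogram matching the model's input geometry:
    20 ms Hamming window, 10 ms hop, 16 kHz audio -> n_fft = 320 and
    320 // 2 + 1 = 161 frequency bins per frame."""
    win = int(sr * win_ms / 1000)           # 320 samples
    hop = int(sr * hop_ms / 1000)           # 160 samples
    assert len(wav) >= win, "need at least one full window of audio"
    window = np.hamming(win)
    n_frames = 1 + (len(wav) - win) // hop
    frames = np.stack([wav[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, n=win, axis=1))  # (n_frames, 161)

# e.g.: feats = spectrogram_frames(np.random.randn(16000))  # 1 s of audio
```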
The convolutional layers are followed by 7 bidirectional recurrent layers, each with a hidden state
size of 1760 dimensions. Notably, these are simple RNNs and not gated units such as long short-term
memory networks (LSTM) [28], as this was found to produce better performance. We also consider a
simpler version of the model, called DeepSpeech2-light, which has 5 layers of bidirectional LSTMs,
each with 600 dimensions (Table 1b). This model runs faster but leads to worse recognition results.
Each convolutional or recurrent layer is followed by batch normalization [29, 30] and a ReLU nonlinearity. The final layer is a fully-connected layer that maps onto the number of symbols (29 symbols:
26 English letters plus space, apostrophe, and a blank symbol).
The network is trained with a CTC loss [31]:

L = −log p(l|x),

where the probability of a label sequence l given an input sequence x is defined as

p(l|x) = Σ_{π ∈ B⁻¹(l)} p(π|x) = Σ_{π ∈ B⁻¹(l)} Π_{t=1}^{T} ASR^K_t(x)[π_t],

where B removes blanks and repeated symbols, B⁻¹ is its inverse image, T is the length of the input frame sequence, and ASR^K_t(x)[j] is unit j of the model output after the top softmax layer at time t, interpreted as the probability of observing label j at time t. This formulation allows mapping long frame sequences to short character sequences by marginalizing over all possible symbol sequences containing blanks and duplicates.
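For readers who want the marginalization spelled out, the following NumPy sketch (ours) computes log p(l|x) with the standard CTC forward algorithm over the blank-extended label sequence; production systems use a batched, log-space implementation such as torch.nn.CTCLoss.

```python
import numpy as np

def ctc_log_likelihood(probs, label, blank=0):
    """Forward algorithm for p(l|x) = sum over paths pi in B^{-1}(l) of
    prod_t probs[t, pi_t]. `probs` is a (T, K) array of per-frame
    posteriors from the softmax layer; `label` is the target sequence.
    Works in probability space for clarity; long inputs need log-space."""
    ext = [blank]
    for s in label:                 # interleave blanks: -, l1, -, l2, ...
        ext += [s, blank]
    T, S = probs.shape[0], len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s >= 1:
                a += alpha[t - 1, s - 1]
            # a skip is allowed unless the symbol is blank or repeats
            # the symbol two positions back
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    p = alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)
    return np.log(p)
```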
3.2 Supervised Classifier
The frame classifier takes features from different layers of the DeepSpeech2 model as input and
predicts a phone label. The size of the input to the classifier thus depends on which layer in
DeepSpeech2 is used to generate features. We model the classifier as a feed-forward neural network
with one hidden layer, where the size of the hidden layer is set to 500.1 This is followed by dropout
(rate of 0.5) and a ReLU non-linearity, then a softmax layer mapping onto the label set size (the
number of unique phones). We chose this simple formulation as we are interested in evaluating the
quality of the representations learned by the ASR model, rather than improving the state-of-the-art on
the supervised task.
We train the classifier with Adam [32] with the recommended parameters (α = 0.001, β1 = 0.9, β2 = 0.999, ε = 10⁻⁸) to minimize the cross-entropy loss. We use a batch size of 16, train the model for 30 epochs, and choose the model with the best development loss for evaluation.
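A PyTorch sketch of this probing classifier and its training setup is given below; the module layout follows the description above, while the training-loop details are our own minimal choices.

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Probing classifier: one 500-unit hidden layer, dropout 0.5, ReLU,
    then a linear map onto the phone inventory. `in_dim` depends on
    which DeepSpeech2 layer supplies the features (e.g., 1760 for the
    recurrent layers)."""
    def __init__(self, in_dim, n_phones=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 500),
            nn.Dropout(0.5),
            nn.ReLU(),
            nn.Linear(500, n_phones),  # CrossEntropyLoss adds log-softmax
        )

    def forward(self, x):
        return self.net(x)

model = FrameClassifier(in_dim=1760)
opt = torch.optim.Adam(model.parameters(), lr=1e-3,
                       betas=(0.9, 0.999), eps=1e-8)
loss_fn = nn.CrossEntropyLoss()
# one step, assuming (feats, phone_ids) minibatches of size 16:
# loss = loss_fn(model(feats), phone_ids); loss.backward(); opt.step()
```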
4 Tools and Data
We use the deepspeech.torch [33] implementation of Baidu's DeepSpeech2 model [7], which
comes with pre-trained models of both DeepSpeech2 and the simpler variant DeepSpeech2-light.
The end-to-end models are trained on LibriSpeech [34], a publicly available corpus of English read
speech, containing 1,000 hours sampled at 16kHz. The word error rates (WER) of the DeepSpeech2
and DeepSpeech2-light models on the Librispeech-test-clean dataset are 12 and 15, respectively [33].
For the phoneme recognition task, we use TIMIT, which comes with time segmentation of phones.
We use the official train/development/test split and extract frames for the frame classification task.
Table 2 summarizes statistics of the frame classification dataset. Note that due to sub-sampling at
the DeepSpeech2 convolutional layers, the number of frames decreases by a factor of two after each
convolutional layer. The possible labels are the 60 phone symbols included in TIMIT (excluding the
begin/end silence symbol h#). We also experimented with the reduced set of 48 phones used by [35].
The code for all of our experiments is publicly available.2
Table 2: Frame classification data extracted from TIMIT.

                      Train     Development   Test
Utterances            3,696     400           192
Frames (input)        988,012   107,620       50,380
Frames (after cnn1)   493,983   53,821        25,205
Frames (after cnn2)   233,916   25,469        11,894
1 We also experimented with a linear classifier and found that it produces lower results overall but leads to similar trends when comparing features from different layers.
2 http://github.com/boknilev/asr-repr-analysis
Figure 1: Frame classification accuracy using representations from different layers of DeepSpeech2 (DS2) and DeepSpeech2-light (DS2-light), with or without strides in the convolutional layers: (a) DS2 w/ strides; (b) DS2 w/o strides; (c) DS2-light w/ strides; (d) DS2-light w/o strides.
5 Results
Figure 1a shows frame classification accuracy using features from different layers of the DeepSpeech2
model. The results are all above a majority baseline of 7.25% (the phone "s"). Input features
(spectrograms) lead to fairly good performance, considering the 60-wise classification task. The
first convolution further improves the results, in line with previous findings about convolutions as
feature extractors before recurrent layers [36]. However, applying a second convolution significantly
degrades accuracy. This can be attributed to the filter width and stride, which may extend across
phone boundaries. Nevertheless, we find the large drop quite surprising.
The first few recurrent layers improve the results, but after the 5th recurrent layer accuracy goes down
again. One possible explanation to this may be that higher layers in the model are more sensitive to
long distance information that is needed for the speech recognition task, whereas the local information
that is needed for classifying phones is better captured in lower layers. For instance, to predict a word
like "bought", the model would need to model relations between different characters, which would
be better captured at the top layers. In contrast, feed-forward neural networks trained on phoneme
recognition were shown to learn increasingly better representations at higher layers [13, 14]; such
networks do not need to model the full speech recognition task, different from end-to-end models.
In the following sections, we first investigate three aspects of the model: model complexity, effect of
strides in the convolutional layers, and effect of blanks. Then we visualize frame representations in
2D and consider classification into abstract sound classes. Finally, Appendix A provides additional
experiments with windows of input features and a reduced phone set, all exhibiting similar trends.
5.1 Model complexity
Figure 1c shows the results of using features from the DeepSpeech2-light model. This model has
fewer recurrent layers (5 vs. 7) and smaller hidden states (600 vs. 1760), but it uses LSTMs instead of
simple RNNs. A first observation is that the overall trend is the same as in DeepSpeech2: significant
drop after the first convolutional layer, then initial increase followed by a drop in the final layers.
Comparing the two models (figures 1a and 1c), a number of additional observations can be made.
First, the convolutional layers of DeepSpeech2 contain more phonetic information than those of
DeepSpeech2-light (+1% and +4% for cnn1 and cnn2, respectively). In contrast, the recurrent layers
in DeepSpeech2-light are better, with the best result of 37.77% in DeepSpeech2-light (by lstm3)
compared to 33.67% in DeepSpeech2 (by rnn5). This suggests again that higher layers do not model
phonology very well; when there are more recurrent layers, the convolutional layers compensate and
generate better representations for phonology than when there are fewer recurrent layers. Interestingly,
the deeper model performs better on the speech recognition task while its deep representations are
not as good at capturing phonology, suggesting that its top layers focus more on modeling character
sequences, while its lower layers focus on representing phonetic information.
5.2 Effect of strides
The original DeepSpeech2 models have convolutions with strides (steps) in the time dimension [7].
This leads to subsampling by a factor of 2 at each convolutional layer, resulting in reduced dataset
size (Table 2). Consequently, the comparison between layers before and after convolutions is not
entirely fair. To investigate this effect, we ran the trained convolutions without strides during feature
generation for the classifier.
Figure 1b shows the results at different layers without using strides in the convolutions. The general
trend is similar to the strided case: large drop at the 2nd convolutional layer, then steady increase in
the recurrent layers with a drop at the final layers. However, the overall shape of the accuracy in the
recurrent layers is less spiky; the initial drop is milder and performance does not degrade as much at
the top layers. A similar pattern is observed in the non-strided case of DeepSpeech2-light (Figure 1d).
These results can be attributed to two factors. First, running convolutions without strides maintains the
number of examples available to the classifier, which means a larger training set. More importantly,
however, the time resolution remains high which can be important for frame classification.
5.3 Effect of blank symbols
Recall that the CTC model predicts either a letter in the alphabet, a space, or a blank symbol. This
allows the model to concentrate probability mass on a few frames that are aligned to the output
symbols in a series of spikes, separated by blank predictions [31]. To investigate the effect of blank
symbols on phonetic representation, we generate predictions of all symbols using the CTC model,
including blanks and repetitions. Then we break down the classifier?s performance into cases where
the model predicted a blank, a space, or another letter.
Figure 2 shows the results using representations from the best recurrent layers in DeepSpeech2 and
DeepSpeech2-light, run with and without strides in the convolutional layers. In the strided case, the
hidden representations are of highest quality for phone classification when the model predicts a blank.
This appears counterintuitive, considering the spiky behavior of CTC models, which should be more
confident when predicting non-blank. However, we found that only 5% of the frames are predicted as
blanks, due to downsampling in the strided convolutions. When the model is run without strides, we
observe a somewhat different behavior. Note that in this case the model predicts many more blanks
(more than 50% compared to 5% in the non-strided case), and representations of frames predicted as
blanks are not as good, which is more in line with the common spiky behavior of CTC models [31].
Figure 2: Frame classification accuracy at frames predicted as blank, space, or another letter by
DeepSpeech2 and DeepSpeech2-light, with and without strides in the convolutional layers.
5.4 Clustering and visualizing representations
In this section, we visualize frame representations from different layers of DeepSpeech2. We first ran
the DeepSpeech2 model on the entire development set of TIMIT and extracted feature representations
for every frame from all layers. This results in more than 100K vectors of different sizes (we use the
model without strides in convolutional layers to allow for comparable analysis across layers). We
followed a similar procedure to that of [20]: We clustered the vectors in each layer with k-means
(k = 500) and plotted the cluster centroids using t-SNE [37]. We assigned to each cluster the phone
label that had the largest number of examples in the cluster. As some clusters are quite noisy, we also
consider pruning clusters where the majority label does not cover enough of the cluster members.
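The following scikit-learn sketch (ours; library defaults stand in for any unstated settings) reproduces the pipeline just described, from k-means with k = 500 through majority labeling to a 2-D t-SNE embedding of the centroids.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def cluster_and_embed(feats, phone_labels, k=500, seed=0):
    """k-means over frame vectors, majority phone label per cluster,
    then t-SNE on the 500 centroids for a 2-D scatter plot."""
    km = KMeans(n_clusters=k, random_state=seed).fit(feats)
    names = []
    for c in range(k):
        members = [phone_labels[i] for i in np.where(km.labels_ == c)[0]]
        names.append(Counter(members).most_common(1)[0][0] if members else None)
    coords = TSNE(n_components=2, random_state=seed).fit_transform(
        km.cluster_centers_)
    return coords, names   # plot coords colored by majority phone
```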
Figure 3 shows t-SNE plots of cluster centroids from selected layers, with color and shape coding for
the phone labels (see Figure 9 in Appendix B for other layers). The input layer produces clusters
which show a fairly clean separation into groups of centroids with the same assigned phone. After the
input layer it is less easy to detect groups, and lower layers do not show a clear structure. In layers
rnn4 and rnn5 we again see some meaningful groupings (e.g., "z" on the right side of the rnn5 plot),
after which rnn6 and rnn7 again show less structure.
Figure 3: Centroids of frame representation clusters using features from different layers.
Figure 10 (in Appendix B) shows clusters that have a majority label of at least 10-20% of the examples
(depending on the number of examples left in each cluster after pruning). In this case groupings are
more observable in all layers, and especially in layer rnn5.
We note that these observations are mostly in line with our previous findings regarding the quality
of representations from different layers. When frame representations are better separated in vector
space, the classifier does a better job at classifying frames into their phone labels; see also [14] for a
similar observation.
5.5 Sound classes
Speech sounds are often organized in coarse categories like consonants and vowels. In this section,
we investigate whether the ASR model learns such categories. The primary question we ask is: which
parts of the model capture most information about coarse categories? Are higher layer representations
more informative for this kind of abstraction above phones? To answer this, we map phones to their
corresponding classes: affricates, fricatives, nasals, semivowels/glides, stops, and vowels. Then we
train classifiers to predict sound classes given representations from different layers of the ASR model.
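The sketch below outlines this experiment. The phone-to-class mapping is abbreviated and hypothetical, and the linear classifier merely stands in for the frame classifier used in the rest of the paper; both are assumptions for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Abbreviated, hypothetical mapping; a full table would cover all phones.
PHONE_TO_CLASS = {"ch": "affricate", "s": "fricative", "m": "nasal",
                  "w": "semivowel/glide", "b": "stop", "iy": "vowel"}

def sound_class_accuracy(train_x, train_phones, test_x, test_phones):
    y_train = [PHONE_TO_CLASS[p] for p in train_phones]
    y_test = [PHONE_TO_CLASS[p] for p in test_phones]
    clf = LogisticRegression(max_iter=1000).fit(train_x, y_train)
    return clf.score(test_x, y_test)
```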
Figure 4 shows the results. All layers produce representations that contain a non-trivial amount
of information about sound classes (above the vowel majority baseline). As expected, predicting
sound classes is easier than predicting phones, as evidenced by a much higher accuracy compared to
our previous results. As in previous experiments, the lower layers of the network (input and cnn1)
produce the best representations for predicting sound classes. Performance then first drops at cnn2
and increases steadily with each recurrent layer, finally decreasing at the last recurrent layer. It
appears that higher layers do not generate better representations for abstract sound classes.
Next we analyze the difference between the input layer and the best recurrent layer (rnn5), broken
down to specific sound classes. We calculate the change in F1 score (harmonic mean of precision and
recall) when moving from input representations to rnn5 representations, where F1 is calculated in two
ways. The inter-class F1 is calculated by directly predicting coarse sound classes, thus measuring how
often the model confuses two separate sound classes. The intra-class F1 is obtained by predicting
fine-grained phones and micro-averaging F1 inside each coarse sound class (not counting confusion
outside the class). It indicates how often the model confuses different phones in the same sound class.
Figure 4: Accuracy of classification into sound classes using representations from different layers of DeepSpeech2.
Figure 5: Difference in F1 score using representations from layer rnn5 compared to the input layer.
Figure 6: Confusion matrices of sound class classification using representations from different layers: (a) input, (b) cnn2, (c) rnn5.
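A minimal sketch of the two F1 variants, assuming scikit-learn's f1_score and a PHONE_TO_CLASS mapping like the hypothetical one sketched earlier:

```python
from sklearn.metrics import f1_score

def inter_class_f1(true_phones, pred_classes):
    # Predict coarse classes directly; confusions between classes count.
    true_classes = [PHONE_TO_CLASS[p] for p in true_phones]
    return f1_score(true_classes, pred_classes, average="micro")

def intra_class_f1(true_phones, pred_phones, sound_class):
    # Restrict to frames whose true phone lies in one coarse class and
    # micro-average phone-level F1 inside it.
    idx = [i for i, p in enumerate(true_phones)
           if PHONE_TO_CLASS[p] == sound_class]
    return f1_score([true_phones[i] for i in idx],
                    [pred_phones[i] for i in idx], average="micro")
```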
As Figure 5 shows, in most cases representations from rnn5 degrade the performance, both within
and across classes. There are two notable exceptions. Affricates are better predicted at the higher
layer, both compared to other sound classes and when predicting individual affricates. It may be that
more contextual information is needed in order to detect a complex sound like an affricate. Second,
the intra-class F1 for nasals improves with representations from rnn5, whereas the inter-class F1 goes
down, suggesting that rnn5 is better at distinguishing between different nasals.
Finally, Figure 6 shows confusion matrices of predicting sound classes using representations from the
input, cnn2, and rnn5 layers. Much of the confusion arises from confusing relatively similar classes:
semivowels/vowels, affricates/stops, affricates/fricatives. Interestingly, affricates are less confused at
layer rnn5 than in lower layers, which is consistent with our previous observation.
6
Conclusion
In this work, we analyzed representations in a deep end-to-end ASR model that is trained with a CTC
loss. We empirically evaluated the quality of the representations on a frame classification task, where
each frame is classified into its corresponding phone label. We compared feature representations from
different layers of the ASR model and observed striking differences in their quality. We also found
that these differences are partly correlated with the separability of the representations in vector space.
In future work, we would like to extend this analysis to other speech features, such as speaker and
dialect ID, and to larger speech recognition datasets. We are also interested in experimenting with
other end-to-end systems, such as sequence-to-sequence models and acoustics-to-words systems.
Another avenue for future work is to improve the end-to-end model based on our insights, for example
by improving the representation capacity of certain layers in the deep neural network.
Acknowledgements
We would like to thank members of the MIT spoken language systems group for helpful discussions.
This work was supported by the Qatar Computing Research Institute (QCRI).
References
[1] A. Graves and N. Jaitly, "Towards End-To-End Speech Recognition with Recurrent Neural Networks," in Proceedings of the 31st International Conference on Machine Learning (ICML-14), T. Jebara and E. P. Xing, Eds. JMLR Workshop and Conference Proceedings, 2014, pp. 1764–1772.
[2] Y. Miao, M. Gowayyed, and F. Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding," in 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015, pp. 167–174.
[3] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio, "End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results," arXiv preprint arXiv:1412.1602, 2014.
[4] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, "Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 4960–4964.
[5] S. Toshniwal, H. Tang, L. Lu, and K. Livescu, "Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition," arXiv preprint arXiv:1704.01631, 2017.
[6] F. Eyben, M. Wöllmer, B. Schuller, and A. Graves, "From Speech to Letters - Using a Novel Neural Network Architecture for Grapheme Based ASR," in 2009 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Nov 2009, pp. 376–380.
[7] D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos et al., "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin," in Proceedings of The 33rd International Conference on Machine Learning, 2016, pp. 173–182.
[8] D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, "End-to-End Attention-based Large Vocabulary Speech Recognition," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 4945–4949.
[9] H. Soltau, H. Liao, and H. Sak, "Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large Vocabulary Speech Recognition," in Interspeech 2017, 2017.
[10] K. Audhkhasi, B. Ramabhadran, G. Saon, M. Picheny, and D. Nahamoo, "Direct Acoustics-to-Word Models for English Conversational Speech Recognition," in Interspeech 2017, 2017.
[11] A.-r. Mohamed, G. Hinton, and G. Penn, "Understanding how deep belief networks perform acoustic modelling," in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2012, pp. 4273–4276.
[12] D. Yu, M. L. Seltzer, J. Li, J.-T. Huang, and F. Seide, "Feature Learning in Deep Neural Networks - Studies on Speech Recognition Tasks," in International Conference on Learning Representations (ICLR), 2013.
[13] T. Nagamine, M. L. Seltzer, and N. Mesgarani, "Exploring How Deep Neural Networks Form Phonemic Categories," in Interspeech 2015, 2015.
[14] ——, "On the Role of Nonlinear Transformations in Deep Neural Network Acoustic Models," in Interspeech 2016, 2016, pp. 803–807.
[15] Y.-H. Wang, C.-T. Chung, and H.-y. Lee, "Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries," in Interspeech 2017, 2017.
[16] Z. Wu and S. King, "Investigating gated recurrent networks for speech synthesis," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 5140–5144.
[17] S. Wang, Y. Qian, and K. Yu, "What Does the Speaker Embedding Encode?" in Interspeech 2017, 2017, pp. 1497–1501. [Online]. Available: http://dx.doi.org/10.21437/Interspeech.2017-1125
[18] R. Chaabouni, E. Dunbar, N. Zeghidour, and E. Dupoux, "Learning weakly supervised multimodal phoneme embeddings," in Interspeech 2017, 2017.
[19] G. Chrupała, L. Gelderloos, and A. Alishahi, "Representations of language in a model of visually grounded speech signal," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2017, pp. 613–622.
[20] D. Harwath and J. Glass, "Learning Word-Like Units from Joint Audio-Visual Analysis," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2017, pp. 506–517.
[21] A. Alishahi, M. Barking, and G. Chrupała, "Encoding of phonology in a recurrent neural model of grounded speech," in Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Association for Computational Linguistics, 2017, pp. 368–378.
[22] A. Köhn, "What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal: Association for Computational Linguistics, September 2015, pp. 2067–2073. [Online]. Available: http://aclweb.org/anthology/D15-1246
[23] P. Qian, X. Qiu, and X. Huang, "Investigating Language Universal and Specific Properties in Word Embeddings," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, August 2016, pp. 1478–1488. [Online]. Available: http://www.aclweb.org/anthology/P16-1140
[24] Y. Adi, E. Kermany, Y. Belinkov, O. Lavi, and Y. Goldberg, "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks," in International Conference on Learning Representations (ICLR), April 2017.
[25] X. Shi, I. Padhi, and K. Knight, "Does String-Based Neural MT Learn Source Syntax?" in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics, November 2016, pp. 1526–1534.
[26] Y. Belinkov, N. Durrani, F. Dalvi, H. Sajjad, and J. Glass, "What do Neural Machine Translation Models Learn about Morphology?" in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2017, pp. 861–872.
[27] L. Gelderloos and G. Chrupała, "From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Osaka, Japan: The COLING 2016 Organizing Committee, December 2016, pp. 1309–1319.
[28] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[29] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," in Proceedings of the 32nd International Conference on Machine Learning (ICML), vol. 37, 2015, pp. 448–456.
[30] C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio, "Batch Normalized Recurrent Neural Networks," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 2657–2661.
[31] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks," in Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006, pp. 369–376.
[32] D. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," arXiv preprint arXiv:1412.6980, 2014.
[33] S. Naren, "deepspeech.torch," https://github.com/SeanNaren/deepspeech.torch, 2016.
[34] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: an ASR corpus based on public domain audio books," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5206–5210.
[35] K.-F. Lee and H.-W. Hon, "Speaker-independent phone recognition using hidden markov models," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 11, pp. 1641–1648, 1989.
[36] T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, "Convolutional, Long Short-Term Memory, fully connected Deep Neural Networks," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 4580–4584.
[37] L. v. d. Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.
[38] H. Sak, F. de Chaumont Quitry, T. Sainath, K. Rao et al., "Acoustic Modelling with CD-CTC-SMBR LSTM RNNs," in 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015, pp. 604–609.
6,455 | 6,839 | Generative Local Metric Learning for
Kernel Regression
Yung-Kyun Noh
Seoul National University, Rep. of Korea
[email protected]
Masashi Sugiyama
RIKEN / The University of Tokyo, Japan
[email protected]
Kee-Eung Kim
KAIST, Rep. of Korea
[email protected]
Frank C. Park
Seoul National University, Rep. of Korea
[email protected]
Daniel D. Lee
University of Pennsylvania, USA
[email protected]
Abstract
This paper shows how metric learning can be used with Nadaraya-Watson (NW)
kernel regression. Compared with standard approaches, such as bandwidth selection, we show how metric learning can significantly reduce the mean square error
(MSE) in kernel regression, particularly for high-dimensional data. We propose a
method for efficiently learning a good metric function based upon analyzing the
performance of the NW estimator for Gaussian-distributed data. A key feature of
our approach is that the NW estimator with a learned metric uses information from
both the global and local structure of the training data. Theoretical and empirical
results confirm that the learned metric can considerably reduce the bias and MSE
for kernel regression even when the data are not confined to Gaussian.
1
Introduction
The Nadaraya-Watson (NW) estimator has long been widely used for nonparametric regression
[16, 26]. The NW estimator uses paired samples to compute a locally weighted average via a kernel
function, K(?, ?): RD ? RD ? R, where D is the dimensionality of data samples. The resulting
estimated output for an input x ? RD is given by the equation:
PN
K(xi , x)yi
yb(x) = Pi=1
(1)
N
i=1 K(xi , x)
D
for data D = {xi , yi }N
and yi ? R, and a translation-invariant kernel
i=1 with xi ? R
2
K(xi , x) = K((x ? xi ) ). This estimator is regarded as a fundamental canonical method in
supervised learning for modeling non-linear relationships using local information. It has previously
been used to interpret predictions using kernel density estimation [11], memory retrieval, decision
making models [19], minimum empirical mean square error (MSE) with local weights [10, 23], and
sampling-based Bayesian inference [25]. All of these interpretations utilize the fact that the estimator
will asymptotically converge to the optimal $\mathbb{E}_{p(y|x)}[y]$ with minimum MSE given an infinite number
of data samples.
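As a concrete reference point, a minimal sketch of the estimator in Eq. (1) with a Gaussian kernel is given below; the bandwidth argument anticipates Section 2 and its default value is an arbitrary assumption.

```python
import numpy as np

def nw_estimate(X, y, x_query, h=1.0):
    """Nadaraya-Watson estimate at x_query from data (X, y)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)   # squared distances to the query
    w = np.exp(-d2 / (2.0 * h ** 2))          # translation-invariant kernel
    return float(np.dot(w, y) / np.sum(w))
```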
However, with finite samples, the NW output $\hat{y}(x)$ is no longer optimal and can deviate significantly
from the true conditional expectation. In particular, the weights given along the directions of large
Figure 1: Metric dependency of kernels. The level curves of kernels are hyper-spheres for isotropic kernels in (a), while they are hyper-ellipsoids for kernels with the Mahalanobis metric as shown in (b). The principal directions of hyper-ellipsoids are the eigenvectors of the symmetric positive definite matrix A which is used in the Mahalanobis distance. When the target variable y varies along the ∇y direction in the figure, the weighted average will give less bias if the metric is extended along the orthogonal direction of ∇y as shown in (b).
variability in y, e.g. the direction of ∇y as in Fig. 1(a), causes significant deviation. In this case,
naively changing the kernel shape, as shown in Fig. 1(b), can alleviate the deviation. In this work, we
investigate more sophisticated methods for finding appropriate kernel shapes via metric learning.
Metric learning is used to find specific directions with increased variability. Using information from
the training examples, metric learning shrinks or extends distances in directions that are more or
less important. A number of studies have focused on using metric learning for nearest neighbor
classification [3, 6, 8, 17, 27], and many recent works have applied it to kernel methods as well
[12, 13, 28]. Most of these approaches focus on modifying relative distances using triplet relationships
or minimizing empirical error with some regularization.
In conventional NW regression, the deviation due to finite sampling is mitigated by controlling the
bandwidth of the kernel function. The bandwidth controls the balance between the bias and the
variance of the estimator, and the finite-sample deviation is reduced with appropriate selection of the
bandwidth [9, 20, 21]. Other approaches include trying to explicitly subtract an estimated bias [5, 24]
or using a higher-order kernel which eliminates the leading-order terms of the bias [22]. However,
many of these direct approaches behave improperly in high-dimensional spaces for two reasons;
distance information is dominated by noise, and by using only nearby data, local algorithms suffer
due to the small number of data used effectively by the algorithms.
In this work, we apply a metric learning method for mitigating the bias. Differently from conventional
metric learning methods, we analyze the metric effect on the asymptotic bias and variance of the NW
estimator. Then we apply a generative model to alleviate the bias in a high-dimensional space. Our
theoretical analysis shows that with a jointly Gaussian assumption on x and y, the metric learning
method reduces to a simple eigenvector problem of finding a two-dimensional embedding space
where the noise is effectively removed. Our approach is similar to the previous method in applying a
simple generative model to mitigate the bias [18], but our analysis shows that there always exists a
metric that eliminates the leading-order bias for any shape of Gaussians, and two dimensionality is
enough to achieve the zero bias. The algorithm based on this analysis shows a good performance for
many benchmark datasets. We interpret the result to mean that the NW estimator indirectly uses the
global information through the rough generative model, and the results are improved because the
information from the global covariance structure is additionally used, which would never be used in
NW estimation otherwise.
One well-known extension of NW regression for reducing its bias is locally linear regression (LLR)
[23]. LLR shows a zero-bias as well for data from Gaussian, but the parameter is solely estimated
locally, which is prone to overfitting in high-dimensional problems. In our experiments, we compare
our method with LLR and demonstrate that our method compares favorably with LLR and other
competitive methods.
The rest of the paper is organized as follows. In Section 2, we explain our metric learning formulation
for kernel regression. In Section 3, we derive the bias and its relationship to the metric, and our
proposed algorithm is introduced in Section 4. In Section 5, we provide experiments with other
standard regression methods, and conclude with a discussion in Section 6.
2
Metric Learning in Kernel Methods
We consider a Mahalanobis-type distance for metric learning. The Mahalanobis-type distance between
two data points $x_i \in \mathbb{R}^D$ and $x_j \in \mathbb{R}^D$ is defined in this work as
$$\|x_i - x_j\|_A = \sqrt{(x_i - x_j)^\top A\,(x_i - x_j)}, \qquad A \succ 0,\; A^\top = A,\; |A| = 1, \qquad (2)$$
with a symmetric positive definite matrix $A \in \mathbb{R}^{D \times D}$ and $|A|$, the determinant of A. By using this
metric, we consider a metric space where the distance is extended or shrunk along the directions
of eigenvectors of A, while the volume of the hypersphere is kept the same due to the determinant
constraint. With an identity matrix A = I, we obtain the conventional Euclidean distance.
A kernel function capturing the local information typically decays rapidly outside a certain distance;
conventionally a bandwidth parameter h is used to control the effective number of data within the
range of interests. If we use the Gaussian kernel as an example, with the aforementioned metric and
bandwidth, the kernel function can be written as
$$K(x_i, x) = K\!\left(\frac{\|x_i - x\|_A}{h}\right) = \frac{1}{\sqrt{(2\pi)^D}\, h^D}\, \exp\!\left(-\frac{1}{2h^2}\,(x_i - x)^\top A\,(x_i - x)\right), \qquad (3)$$
where the 'relative' bandwidths along individual directions are determined by A, and the overall size
of the kernel is determined by h.
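For illustration, a minimal sketch of the kernel in Eq. (3) is shown below; A is assumed to be a symmetric positive definite matrix with unit determinant, as required by Eq. (2).

```python
import numpy as np

def metric_gaussian_kernel(xi, x, A, h):
    D = x.shape[0]
    diff = xi - x
    maha2 = diff @ A @ diff                      # ||xi - x||_A^2
    norm = np.sqrt((2.0 * np.pi) ** D) * h ** D  # sqrt(2*pi)^D * h^D
    return float(np.exp(-maha2 / (2.0 * h ** 2)) / norm)
```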
3
Bias of Nadaraya-Watson Kernel Estimator
We first note that our target function is the conditional expectation $y(x) = \mathbb{E}[y|x]$, and it is invariant to metric change. When we consider a D-dimensional vector $x \in \mathbb{R}^D$ and its stochastic prediction $y \in \mathbb{R}$, the conditional expectation $y(x) = \mathbb{E}[y|x]$ minimizes the MSE. If we consider two different spaces with coordinates $x \in \mathbb{R}^D$ and $z \in \mathbb{R}^D$ and a linear transformation between these two spaces, $z = L^\top x$, with a full-rank square matrix $L \in \mathbb{R}^{D \times D}$, the expectation of y is invariant to the coordinate change satisfying $\mathbb{E}[y|x] = \mathbb{E}[y|z]$, because the conditional density is preserved by the metric change: $p(y|x) = p(y|z)$ for all corresponding x and z, and
$$\mathbb{E}[y|x] = \int y\, p(y|x)\, dy = \int y\, p(y|z)\, dy = \mathbb{E}[y|z]. \qquad (4)$$
The equivalence means that the target function is invariant to metric change with $A = LL^\top$, and considering that the NW estimator achieves the optimal prediction $\mathbb{E}[y|x]$ with infinite data, optimal prediction is achieved with infinite data regardless of the choice of metric. Thus the metric dependency is actually a finite sampling effect along with the bias and the variance.
3.1
Metric Effects on Bias
The bias is the expected deviation of the estimator from the true mean of the target variable y(x):
$$\mathrm{Bias} = \mathbb{E}[\hat{y}(x) - y(x)] = \mathbb{E}\!\left[\frac{\sum_{i=1}^{N} K(x_i, x)\, y_i}{\sum_{i=1}^{N} K(x_i, x)} - y(x)\right]. \qquad (5)$$
Standard methods for calculating the bias assume asymptotic concentration around the means, both in the numerator and in the denominator of the NW estimator. Usually, the numerator and denominator of the bias are approximated separately, and the bias of the whole NW estimator is calculated using a plug-in method [15, 23]. We assume a kernel satisfying $\int K(z)\,dz = 1$, $\int z K(z)\,dz = 0$, and $\int z z^\top K(z)\,dz = I$. For example, the Gaussian kernel in Eq. (3) satisfies all of these conditions. Then we can first approximate the denominator as¹
$$\mathbb{E}_{x_1,\dots,x_N}\!\left[\frac{1}{N}\sum_{i=1}^{N} K(x_i, x)\right] = p(x) + \frac{h^2}{2}\,\nabla^2 p(x) + O(h^4), \qquad (6)$$
¹ See Appendix in the supplementary material for the detailed derivation.
with Laplacian ∇², the trace of the Hessian with respect to x. Similarly, the expectation of the numerator becomes
$$\mathbb{E}_{\substack{x_1,\dots,x_N \\ y_1,\dots,y_N}}\!\left[\frac{1}{N}\sum_{i=1}^{N} K(x, x_i)\, y_i\right] = p(x)\,y(x) + \frac{h^2}{2}\,\nabla^2\!\left[p(x)\,y(x)\right] + O(h^4). \qquad (7)$$
Using the plug-ins of Eq. (6) and Eq. (7), we can find the leading-order terms of the NW estimation, and the bias of the NW estimator can be obtained as follows:
$$\mathbb{E}\!\left[\frac{\sum_{i=1}^{N} K(x, x_i)\, y_i}{\sum_{i=1}^{N} K(x, x_i)} - y(x)\right] = h^2\left(\frac{\nabla^\top p(x)\,\nabla y(x)}{p(x)} + \frac{\nabla^2 y(x)}{2}\right) + O(h^4). \qquad (8)$$
Here, all gradients ∇ and Laplacians ∇² are with respect to x. We have noted that the target $y(x) = \mathbb{E}[y|x]$ is invariant to the metric change, and the metric dependency comes from the finite sample deviation terms. Here, both the gradient and the Laplacian in the deviation are dependent on the change of metric A.
3.2
Conventional Methods of Reducing Bias
Previously, there have been works intended to reduce the deviation [9, 20, 21]. A standard approach
is to adapt the size of bandwidth parameter h under the minimum MSE criterion. Bandwidth selection
has an intuitive motivation of balancing the tradeoff between the bias and the variance; the bias can
be reduced by using a small bandwidth but at the cost of increasing the variance. Therefore, for
bandwidth selection, the bias and variance criteria have to be used at the same time.
Another straightforward and well-known extension of the NW estimator is the locally linear regression
(LLR) [2, 23]. Considering that Eq. (1) is the solution minimizing the local empirical MSE:
$$y(x) = \arg\min_{\alpha \in \mathbb{R}} \sum_{i=1}^{N} (y_i - \alpha)^2\, K(x_i, x), \qquad (9)$$
the LLR extends this objective function to
$$[\,y(x),\ \beta^*(x)\,] = \arg\min_{\alpha \in \mathbb{R},\, \beta \in \mathbb{R}^D} \sum_{i=1}^{N} \left(y_i - \alpha - \beta^\top (x_i - x)\right)^2 K(x_i, x), \qquad (10)$$
to eliminate the noise produced by the linear component of the target function. The vector parameter $\beta^*(x) \in \mathbb{R}^D$ is the estimated local gradient using local data, and this vector often overfits in a high-dimensional space, resulting in a poor solution of α.
However, LLR asymptotically produces the bias of
$$\mathrm{Bias}_{\mathrm{LLR}} = \frac{h^2}{2}\,\nabla^2 y(x) + O(h^4). \qquad (11)$$
Eq. (11) can be compared with the NW bias in Eq. (8), where the bias term from the linear variation of y with respect to x, $h^2\,\nabla^\top p\,\nabla y / p$, is eliminated.
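For comparison, a minimal weighted-least-squares implementation of Eq. (10) follows; the Gaussian kernel and the bandwidth default are illustrative assumptions.

```python
import numpy as np

def llr_estimate(X, y, x_query, h=1.0):
    """Locally linear regression: returns (alpha, beta) of Eq. (10)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    Phi = np.hstack([np.ones((X.shape[0], 1)), X - x_query])  # [1, (x_i - x)^T]
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * y, rcond=None)
    return theta[0], theta[1:]   # estimate alpha and local gradient beta
```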
4
Metric for Nadaraya-Watson Regression
In this section, we propose a metric that appropriately reduces the metric-dependent bias of the NW
estimator.
4.1
Nadaraya-Watson Regression for Gaussian
In order to obtain a metric, we first provide the following theorem which guarantees the existence of
a good metric that eliminates the leading order bias at any point regardless of the configuration of
Gaussian.
Theorem 1: At any point x, there exists a metric matrix A such that, for data $x \in \mathbb{R}^D$ and the output $y \in \mathbb{R}$ jointly generated from any (D+1)-dimensional Gaussian, the NW regression with distance $d(x, x') = \|x - x'\|_A$, for $x, x' \in \mathbb{R}^D$, has a zero leading-order bias.
Based on the theorem, we will consider using the corresponding metric space for NW regression at
each point. The theorem is proven using the following Proposition 2 and Lemma 3, which are general
claims without the Gaussian assumptions.
Proposition 2: There exists a symmetric positive definite matrix A that eliminates the first term $\frac{\nabla^\top p(x)\,\nabla y(x)}{p(x)}$ inside the bias in Eq. (8), when used with the metric in Eq. (2), and when there exist two linearly independent gradients of p(x) and y(x), and p(x) is away from zero.
Proof: We consider a coordinate transformation $z = L^\top x$ with L satisfying $A = LL^\top$. The gradient of a differentiable function y(·) and a density function p(·) with respect to z is
$$\nabla_z y(z)\big|_{z=L^\top x} = L^{-1}\,\nabla_x y(x), \qquad \nabla_z p(z)\big|_{z=L^\top x} = \frac{1}{|L|}\, L^{-1}\,\nabla_x p(x), \qquad (12)$$
and the scalar $\nabla^\top p(x)\,\nabla y(x)$ in the Euclidean space can be rewritten in the transformed space as
$$\nabla_z^\top p(z)\,\nabla_z y(z) = \frac{1}{2}\left(\nabla_z^\top p(z)\,\nabla_z y(z) + \nabla_z^\top y(z)\,\nabla_z p(z)\right) \qquad (13)$$
$$= \frac{1}{2|L|}\left(\nabla_x^\top p(x)\, L^{-\top} L^{-1}\, \nabla_x y(x) + \nabla_x^\top y(x)\, L^{-\top} L^{-1}\, \nabla_x p(x)\right) \qquad (14)$$
$$= \frac{1}{2|A|^{1/2}}\,\mathrm{tr}\!\left[A^{-1}\left(\nabla_x y(x)\,\nabla_x^\top p(x) + \nabla_x p(x)\,\nabla_x^\top y(x)\right)\right]. \qquad (15)$$
The symmetric matrix $B = \nabla y(x)\,\nabla^\top p(x) + \nabla p(x)\,\nabla^\top y(x)$ has rank two with independent ∇y(x) and ∇p(x) and can be eigen-decomposed as
$$B = [u_1\ u_2]\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}[u_1\ u_2]^\top \qquad (16)$$
with eigenvectors $u_1$ and $u_2$ and nonzero eigenvalues $\lambda_1$ and $\lambda_2$. A sufficient condition for the existence of A is that the two eigenvalues have different signs, in other words, $\lambda_1 \lambda_2 < 0$.
Let $\lambda_1 > 0$ and $\lambda_2 < 0$ without loss of generality, and we choose a positive definite matrix having the following eigenvector decomposition:
$$A = [u_1\ u_2\ \cdots]\begin{bmatrix} \lambda_1 & 0 & \cdots \\ 0 & -\lambda_2 & \\ \vdots & & \ddots \end{bmatrix}[u_1\ u_2\ \cdots]^\top. \qquad (17)$$
Then Eq. (15) becomes zero, yielding a zero value for the first term of the bias with nonzero p(x).
Therefore, we can always find A that eliminates the first term of the bias once B has one positive and
one negative eigenvalue, and the following Lemma 3 proves that B always has one positive and one
negative eigenvalue.
Lemma 3: A symmetric matrix $B = (B' + B'^\top)/2$ has two nonzero eigenvalues for a rank-one matrix $B' = v_1 v_2^\top$ with two linearly independent vectors, $v_1$ and $v_2$. Here, one of the two eigenvalues is positive, and the other is negative.
Proof: We can reformulate B as
$$B = \frac{1}{2}\left(v_1 v_2^\top + v_2 v_1^\top\right) = \frac{1}{2}\,[v_1\ v_2]\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}[v_1\ v_2]^\top. \qquad (18)$$
If we make a new square matrix of size two, $M = [v_1\ v_2]^\top B\,[v_1\ v_2]$, the determinant of the matrix is as follows using the eigen-decomposition of B with eigenvectors $u_1$ and $u_2$ and eigenvalues $\lambda_1$ and $\lambda_2$:
$$|M| = \left|[v_1\ v_2]^\top B\,[v_1\ v_2]\right| \qquad (19)$$
$$= \left|[v_1\ v_2]^\top [u_1\ u_2]\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}[u_1\ u_2]^\top [v_1\ v_2]\right| \qquad (20)$$
$$= \lambda_1 \lambda_2 \left(v_1^\top u_1\, v_2^\top u_2 - v_1^\top u_2\, v_2^\top u_1\right)^2, \qquad (21)$$
and at the same time, |M| is always negative by the following derivation:
$$|M| = \left|\tfrac{1}{2}\,[v_1\ v_2]^\top [v_1\ v_2]\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}[v_1\ v_2]^\top [v_1\ v_2]\right| = -\tfrac{1}{4}\left|[v_1\ v_2]^\top [v_1\ v_2]\right|^2 < 0. \qquad (22)$$
From these calculations, $\lambda_1 \lambda_2 < 0$, and $\lambda_1$ and $\lambda_2$ always have different signs.
With Proposition 2 and Lemma 3, we always have a metric space associated with A in Eq. (17) that eliminates the leading-order bias of a Gaussian, because $\nabla^2 y(x) = 0$ is always satisfied for x and y which are jointly Gaussian, eliminating the second term of Eq. (8) as well.
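The construction can be checked numerically. The minimal sketch below builds B from two random independent gradients, forms A as in Eq. (17) (with unit eigenvalues on the orthogonal complement; the determinant constraint is omitted here, since rescaling A does not affect whether the trace vanishes), and verifies that tr[A⁻¹B] is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5
grad_y, grad_p = rng.normal(size=D), rng.normal(size=D)
B = np.outer(grad_y, grad_p) + np.outer(grad_p, grad_y)

lam, U = np.linalg.eigh(B)            # ascending; the ends are the nonzero pair
l_neg, l_pos = lam[0], lam[-1]        # one negative, one positive (Lemma 3)
u_neg, u_pos = U[:, 0], U[:, -1]

P = np.outer(u_pos, u_pos) + np.outer(u_neg, u_neg)
A = (l_pos * np.outer(u_pos, u_pos)   # eigenvalue lambda_1 on u_1
     - l_neg * np.outer(u_neg, u_neg) # eigenvalue -lambda_2 on u_2
     + (np.eye(D) - P))               # positive eigenvalues elsewhere
print(np.trace(np.linalg.solve(A, B)))   # ~0 up to floating-point error
```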
4.2
Gaussian Model for Metric Learning
We now know there exists an interesting scaling by a metric change where the NW regression achieves
the bias $O(h^4)$. The metric we use is as follows:
$$A_{\mathrm{NW}} = \eta\left([u_+\ u_-]\begin{bmatrix} \lambda_+ & 0 \\ 0 & -\lambda_- \end{bmatrix}[u_+\ u_-]^\top + \epsilon I\right), \qquad \text{for } |A_{\mathrm{NW}}| = 1. \qquad (23)$$
Here, η is the constant determined from the constraint $|A_{\mathrm{NW}}| = 1$. We use one positive and one negative eigenvalue, $\lambda_+ > 0$ and $\lambda_- < 0$, from matrix B:
$$B = \nabla y(x)\,\nabla^\top p(x) + \nabla p(x)\,\nabla^\top y(x), \qquad (24)$$
and their corresponding eigenvectors $u_+$ and $u_-$. A small positive regularization constant ε is added after being multiplied by the identity matrix.
By adding a regularization term to the metric, the deviation with exact ∇p(x) and ∇y(x) becomes nonzero, but a small value,
$$\frac{h^2}{2p(x)}\,\mathrm{tr}[A_{\mathrm{NW}}^{-1} B] = \frac{h^2}{2p(x)\,\eta}\left(\frac{\lambda_+}{\lambda_+ + \epsilon} - \frac{\lambda_-}{\lambda_- + \epsilon}\right) = \frac{\epsilon\, h^2}{2p(x)\,\eta}\,\frac{\lambda_+ - \lambda_-}{\lambda_+ \lambda_-} + O(\epsilon^2).$$
However, with small ε, the deviation is still low unless p(x) is close to zero, or ∇p(x) and ∇y(x) are parallel.
The matrix ANW is obtained for every point of interest, and the NW regression of each point is
performed with a different ANW calculated at each point. ANW is a function of x, but the changing
part is only the rank two matrix, and the calculation is simple, since we only have to solve the
eigenvector problem of a 2 × 2 matrix for each query point regardless of the original dimensionality.
Note that the bandwidth h is not yet included for the optimization when we obtain the metric. After
we obtain the metric, we can still use bandwidth selection for even better MSE.
In order to obtain the metric $A_{\mathrm{NW}}$, at every query, we need the information of ∇p(x) and ∇y(x).
The knowledge of true y(x) and p(x) is unknown, and we need to obtain the gradient information
from data again. Previously, the gradient information was obtained locally with a small number of
samples [4, 7], but such methods are not preferred here because we need to overcome the corruption
of the local information in high-dimensional cases. Instead, we use a global parametric model: Using
a single Gaussian model for all data, we estimate the gradient of true y(x) and p(x) at each point
from the global configuration of data fitted by a single Gaussian:
$$p\!\begin{pmatrix} y \\ x \end{pmatrix} = \mathcal{N}\!\left(\begin{pmatrix} \mu_y \\ \mu_x \end{pmatrix},\ \begin{pmatrix} \sigma_y & \Sigma_{yx} \\ \Sigma_{xy} & \Sigma_x \end{pmatrix}\right). \qquad (25)$$
In fact, the target function $y(x) = \Sigma_{yx}\Sigma_x^{-1}(x - \mu_x) + \mu_y$ (see Appendix) can be analytically obtained in a closed form when we estimate the parameters of the Gaussian, but we reuse y(x) for enhancement of the NW regression, and the NW regression updates y(x) using local information. The gradients for metric learning can be obtained using $\nabla y(x) = \hat{\Sigma}_x^{-1}\hat{\Sigma}_{xy}$ and $\frac{\nabla p(x)}{p(x)} = -\hat{\Sigma}_x^{-1}(x - \hat{\mu}_x)$ from the estimated parameters $\hat{\mu}_x$, $\hat{\Sigma}_{xy}$, and $\hat{\Sigma}_x$, if the global model is Gaussian. A pseudo-code of the proposed method is presented in Algorithm 1.
4.3
Interpretation of the Metric
The learned metric $A_{\mathrm{NW}}$ considers the two-dimensional subspace spanned by $\nabla p(x) = -p(x)\,\Sigma_x^{-1}(x - \mu_x)$ and $\nabla y(x) = \Sigma_x^{-1}\Sigma_{xy}$. The two-dimensionality analysis of the metric shows that the distant points are used for those in the space orthogonal to this two-dimensional subspace.
Algorithm 1 Generative Local Metric Learning for NW Regression
Input: data $\mathcal{D} = \{x_i, y_i\}_{i=1}^{N}$ and point for regression x
Output: regression output $\hat{y}(x)$
Procedure:
1: Find joint covariance matrix $\Sigma = \begin{pmatrix} \sigma_y & \Sigma_{yx} \\ \Sigma_{xy} & \Sigma_x \end{pmatrix}$ and mean vector $\mu = \begin{pmatrix} \mu_y \\ \mu_x \end{pmatrix}$ from data $\mathcal{D}$.
2: Obtain two eigenvectors
$$u_1 = \frac{\nabla p(x)}{\|\nabla p(x)\|} + \frac{\nabla y}{\|\nabla y\|} \quad\text{and}\quad u_2 = \frac{\nabla p(x)}{\|\nabla p(x)\|} - \frac{\nabla y}{\|\nabla y\|}, \qquad (26)$$
and their corresponding eigenvalues
$$\lambda_1 = \frac{1}{2p(x)}\left(\nabla y^\top \nabla p + \|\nabla y\|\,\|\nabla p\|\right) \quad\text{and}\quad \lambda_2 = \frac{1}{2p(x)}\left(\nabla y^\top \nabla p - \|\nabla y\|\,\|\nabla p\|\right), \qquad (27)$$
using $\nabla p(x) = -p(x)\,\Sigma_x^{-1}(x - \mu_x)$ and $\nabla y = \Sigma_x^{-1}\Sigma_{xy}$.
3: Obtain the transform matrix L using $u_1$, $u_2$, $\lambda_1$, and $\lambda_2$:
$$L = \left[\frac{u_1}{\|u_1\|}\ \ \frac{u_2}{\|u_2\|}\ \ U_o\right]\begin{bmatrix} \sqrt{\lambda_1 + \epsilon}/T & & & \\ & \sqrt{-\lambda_2 + \epsilon}/T & & \\ & & \sqrt{\epsilon}/T & \\ & & & \ddots \end{bmatrix} \qquad (28, 29)$$
with $T = \left((\lambda_1 + \epsilon)(-\lambda_2 + \epsilon)\,\epsilon^{D-2}\right)^{\frac{1}{2D}}$, a small constant ε, and an orthonormal matrix $U_o \in \mathbb{R}^{D \times (D-2)}$ spanning the normal space of $u_1$ and $u_2$.
4: Perform NW regression at $z = L^\top x$ using transformed data $z_i = L^\top x_i$, $i = 1, \dots, N$.
This fact has the effect of virtually increasing the amount of data compared with algorithms with
isotropic kernels, particularly in high-dimensional space.
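A compact sketch of Algorithm 1 is given below. For simplicity the gradients are computed per unit density (∇p/p), which only rescales the eigenvalues relative to ε, and the NW step is inlined; ε and h are free parameters here, so this is an illustrative implementation under those assumptions rather than the authors' code.

```python
import numpy as np

def glml_nw_regression(X, y, x_query, h=1.0, eps=1e-2):
    N, D = X.shape
    mu_x = X.mean(axis=0)
    Sigma_x = np.cov(X, rowvar=False)
    Sigma_xy = (X - mu_x).T @ (y - y.mean()) / N            # Cov(x, y)

    grad_p = -np.linalg.solve(Sigma_x, x_query - mu_x)      # (grad p)/p at x
    grad_y = np.linalg.solve(Sigma_x, Sigma_xy)             # grad y(x)

    gp = grad_p / np.linalg.norm(grad_p)
    gy = grad_y / np.linalg.norm(grad_y)
    u1, u2 = gp + gy, gp - gy                               # Eq. (26)
    dot = grad_y @ grad_p
    nrm = np.linalg.norm(grad_y) * np.linalg.norm(grad_p)
    lam1, lam2 = 0.5 * (dot + nrm), 0.5 * (dot - nrm)       # Eq. (27), up to 1/p(x)

    # Eqs. (28)-(29): columns [u1 u2 Uo] scaled to unit determinant.
    Q, _ = np.linalg.qr(np.column_stack([u1, u2]), mode="complete")
    basis = np.column_stack([u1 / np.linalg.norm(u1),
                             u2 / np.linalg.norm(u2),
                             Q[:, 2:]])                     # Uo = complement
    scales = np.concatenate([[np.sqrt(lam1 + eps), np.sqrt(-lam2 + eps)],
                             np.full(D - 2, np.sqrt(eps))])
    L = basis * (scales / np.prod(scales) ** (1.0 / D))

    # Step 4: NW regression in the transformed space z = L^T x.
    Z, zq = X @ L, x_query @ L
    w = np.exp(-np.sum((Z - zq) ** 2, axis=1) / (2.0 * h ** 2))
    return float(w @ y / w.sum())
```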
The following proposition gives an intuitive explanation that the bias reduction is more important
in high-dimensional space than the reduction of the variance once the optimal bandwidth has been
selected balancing the leading terms of the bias and variance after the change of metric. Proposition
2, Lemma 3, and the following Proposition 4 are obtained without any Gaussian assumption.
Proposition 4: Let us simplify the MSE as the squared bias obtained from the leading terms in Eq. (8) and the variance², i.e.,
$$f(h) = h^4 C_1 + \frac{1}{N h^D}\, C_2. \qquad (31)$$
Then, at some $h^*$, it has the minimum $f(h^*) = C_1$ in the limit with infinite D, where D is the dimensionality of data.
Proof: The optimal h can be obtained using $\frac{\partial f(h)}{\partial h}\big|_{h=h^*} = 0$, and the optimal h is
$$h^* = N^{-\frac{1}{D+4}}\left(\frac{D\, C_2}{4\, C_1}\right)^{\frac{1}{D+4}}. \qquad (32)$$
² See Section 6 of the Appendix: $C_1 = \left(\frac{\nabla^\top p(x)\,\nabla y(x)}{p(x)} + \frac{\nabla^2 y(x)}{2}\right)^2$ and $C_2 = \frac{\sigma_y^2(x)}{(2\sqrt{\pi})^D\, p(x)}$. (30)
Figure 2: (a) Metric calculation for a Gaussian and gradient ∇y. (b) Empirical MSEs with and without the metric. (c) Leading order terms in MSE with optimal bandwidth for various numbers of data.
By plugging $h^*$ into f(h) in Eq. (31), we obtain
$$f(h^*) = N^{-\frac{4}{D+4}}\left(\left(\frac{D}{4}\right)^{\frac{4}{D+4}} + \left(\frac{4}{D}\right)^{\frac{D}{D+4}}\right) C_1^{\frac{D}{D+4}}\, C_2^{\frac{4}{D+4}} \simeq C_1 \quad (\text{for } D \gg 4). \qquad (33)$$
In Proposition 4, the first term $h^4 C_1$ is the square of the bias, and the second term $\frac{1}{N h^D} C_2$ is the derived variance. The MSE is minimized in a high-dimensional space only through the minimization of the bias when it is accompanied by the optimization with respect to the bandwidth h. The plot of MSE in Fig. 2(c) shows that the MSE with bandwidth selection quickly approaches $C_1$, in particular with a small number of data. The derivation shows that we can ignore the variance optimization with respect to the metric change. We focus only on achieving a small bias; rather than minimizing the variance, the bandwidth selection follows later.
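A short numeric illustration of Proposition 4 (with C1 = C2 = 1 as an arbitrary assumption) shows the minimized MSE approaching C1 as the dimensionality grows:

```python
C1, C2, N = 1.0, 1.0, 10_000
for D in (2, 8, 32, 128):
    h_star = (D * C2 / (4.0 * C1 * N)) ** (1.0 / (D + 4))   # Eq. (32)
    f_star = h_star ** 4 * C1 + C2 / (N * h_star ** D)      # Eq. (31)
    print(D, round(f_star, 4))   # tends toward C1 = 1.0 as D grows
```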
5
Experiments
The proposed algorithm is evaluated using both synthetic and real datasets. For a Gaussian, Fig. 2(a) depicts the eigenvectors along with the eigenvalues of the matrix $B = \nabla y\,\nabla^\top p + \nabla p\,\nabla^\top y$ at different points in the two-dimensional subspace spanned by ∇y and ∇p. The metric can be compared with the adaptive scaling proposed in [14], which determines the metric according to the average amount of ∇y. Our metric also uses ∇y, but the metric is determined using the relationship with ∇p.
Fig. 2(a) shows the metric eigenvalues and eigenvectors at each point for a two-dimensional Gaussian
with a covariance contour in the figure. With Gaussian data, the MSE with the proposed metric is
shown along with MSE with the Euclidean metric in Fig. 2(b). The metric is obtained from the
estimated parameter of a jointly Gaussian model, where the result with a learned metric shows a huge
difference in the MSE.
For real-data experiments, we used the Delve datasets (Abalone, Bank-8fm, Bank-32fh, CPU), UCI
datasets (Community, NavalC, NavalT, Protein, Slice), KEEL datasets (Ailerons, Elevators, Puma32h)
[1], and datasets from a previous paper (Pendulum, Pol) [15]. The datasets include dozens of features
and several thousands to tens of thousands of data. Using a Gaussian model with regularized
maximum likelihood estimated parameters, we apply a metric which minimizes the bias with a fixed $\epsilon = \max(|\lambda_1|, |\lambda_2|) \times 10^{-2}$, and we choose h from a pre-chosen validation set. NW estimation with the proposed metric (NW+GMetric) is compared with the conventional NW estimation (NW), LLR (LLR), the previous metric learning methods for NW regression (NW+WMetric [28], NW+KMetric [14]), a more flexible Gaussian process regression (GPR) with the Gaussian kernel, and the Gaussian globally linear model (GGL) using $y(x) = \hat{\Sigma}_{yx}\hat{\Sigma}_x^{-1}(x - \hat{\mu}_x) + \hat{\mu}_y$.
For eleven datasets among a total of fourteen datasets, the NW estimation with the proposed metric
statistically achieves one of the best performances. Even when the estimation does not achieve
the best performance, the metric always reduces the MSE from the original NW estimation. In
particular, in the Slice, Pol, CPU, NavalC, and NavalT datasets, GGL performs poorly showing the
non-Gaussianity of data, while the metric using the same information effectively reduces the MSE from the original NW estimator. A detailed discussion comparing the proposed method with other methods for non-Gaussian data is provided in Sections 3 and 4 of the Appendix.
Figure 3: Regression with real-world datasets. NW is the NW regression with conventional kernels, NW+GMetric is the NW regression with the proposed metric, LLR is the locally linear regression, NW+WMetric [28] and NW+KMetric [14] are different metrics for NW regression, GPR is the Gaussian process regression, and GGL is the Gaussian globally linear model. Normalized MSE (NMSE) is the ratio between the MSE and the variance of the target value. If we constantly choose the mean of the target, we get an NMSE of 1.
6
Conclusions
An effective metric function is investigated for reducing the bias of NW regression. Our analysis has
shown that the bias can be minimized under certain generative assumptions. The optimal metric is
obtained by solving a series of eigenvector problems of size 2 by 2 and needs no explicit gradients or
curvature information.
The Gaussian model captures only the rough covariance structure of whole data. The proposed
approach uses the global covariance to identify the directions that are most likely to have gradient
components, and the experiments with real data show that the method is effective for more reliable
and less biased estimation. This is in contrast to LLR which attempts to eliminate the linear noise, but
the noise elimination relies on a small number of local data. In contrast, our model uses additional
information from distant data only if they are close in the projected two-dimensional subspace. As a
result, the metric allows a more reliable unbiased estimation of the NW estimator.
We have also shown that minimizing the variance is relatively unimportant in high-dimensional
spaces compared to minimizing the bias, especially when the bandwidth selection method is used.
Consequently, our bias minimization method can achieve sufficiently low MSE without the additional
computational cost incurred by empirical MSE minimization.
Acknowledgments
YKN acknowledges support from NRF/MSIT-2017R1E1A1A03070945, BK21Plus in Korea, MS from KAKENHI 17H01760 in Japan, KEK from IITP/MSIT 2017-0-01778 in Korea, FCP from BK21Plus, MITIP10048320 in Korea, and DDL from the NSF, ONR, ARL, AFOSR, DOT, DARPA in US.
References
[1] J. Alcalá-Fdez, A. Fernandez, J. Luengo, J. Derrac, S. García, L. Sánchez, and F. Herrera. KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. Journal of Multiple-Valued Logic and Soft Computing, 17(2-3):255–287, 2011.
[2] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, 11(1-5):11–73, February 1997.
[3] A. Bellet, A. Habrard, and M. Sebban. A survey on metric learning for feature vectors and structured data. CoRR, abs/1306.6709, 2013.
[4] Y. Cheng. Mean shift, mode seeking, and clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17:790–799, 1995.
[5] E. Choi, P. Hall, and V. Rousson. Data sharpening methods for bias reduction in nonparametric regression. Annals of Statistics, 28(5):1339–1355, 2000.
[6] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, pages 209–216, 2007.
[7] K. Fukunaga and D. H. Larry. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21:32–40, 1975.
[8] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems 17, pages 513–520. 2005.
[9] P. Hall, S. J. Sheather, M. C. Jones, and J. S. Marron. On optimal data-based bandwidth selection in kernel density estimation. Biometrika, 78:263–269, 1991.
[10] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA, 2001.
[11] S. Haykin. Neural Networks and Learning Machines (3rd Edition). Prentice Hall, 3 edition, 2008.
[12] R. Huang and S. Sun. Kernel regression with sparse metric learning. Journal of Intelligent and Fuzzy Systems, 24(4):775–787, 2013.
[13] P. W. Keller, S. Mannor, and D. Precup. Automatic basis function construction for approximate dynamic programming and reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 449–456, New York, NY, USA, 2006. ACM.
[14] S. Kpotufe, A. Boularias, T. Schultz, and K. Kim. Gradients weights improve regression and classification. Journal of Machine Learning Research, 17(22):1–34, 2016.
[15] M. Lazaro-Gredilla and A. R. Figueiras-Vidal. Marginalized neural network mixtures for large-scale regression. IEEE Transactions on Neural Networks, 21(8):1345–1351, 2010.
[16] E. A. Nadaraya. On estimating regression. Theory of Probability and its Applications, 9:141–142, 1964.
[17] B. Nguyen, C. Morell, and B. De Baets. Large-scale distance metric learning for k-nearest neighbors regression. Neurocomputing, 214:805–814, 2016.
[18] Y.-K. Noh, B.-T. Zhang, and D. D. Lee. Generative local metric learning for nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, to appear, doi: 10.1109/TPAMI.2017.2666151, 2017.
[19] R. M. Nosofsky and T. J. Palmeri. An exemplar-based random walk model of speeded classification. Psychological Review, 104(2):266–300, 1997.
[20] B. U. Park and J. S. Marron. Comparison of data-driven bandwidth selectors. Journal of the American Statistical Association, 85:66–72, 1990.
[21] B. U. Park and B. A. Turlach. Practical performance of several data driven bandwidth selectors. Computational Statistics, 7:251–270, 1992.
[22] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065–1076, 1962.
[23] D. Ruppert and M. P. Wand. Multivariate Locally Weighted Least Squares Regression. The Annals of Statistics, 22(3):1346–1370, September 1994.
[24] W. R. Schucany and John P. Sommers. Improvement of kernel type density estimators. Journal of the American Statistical Association, 72:420–423, 1977.
[25] L. Shi, T. L. Griffiths, N. H. Feldman, and A. N. Sanborn. Exemplar models as a mechanism for performing Bayesian inference. Psychonomic Bulletin & Review, 17(4):443–464, 2010.
[26] Geoffrey S. Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, 26:359–372, 1964.
[27] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems 18, pages 1473–1480. 2006.
[28] K. Q. Weinberger and G. Tesauro. Metric learning for kernel regression. In Eleventh International Conference on Artificial Intelligence and Statistics, pages 608–615, 2007.
6,456 | 684 | A Neural Model of Descending Gain
Control in the Electrosensory System
Mark E. Nelson
Beckman Institute
University of Illinois
405 N. Mathews
Urbana, IL 61801
Abstract
In the electrosensory system of weakly electric fish, descending
pathways to a first-order sensory nucleus have been shown to influence the gain of its output neurons. The underlying neural mechanisms that subserve this descending gain control capability are not
yet fully understood. We suggest that one possible gain control
mechanism could involve the regulation of total membrane conductance of the output neurons. In this paper, a neural model based
on this idea is used to demonstrate how activity levels on descending pathways could control both the gain and baseline excitation
of a target neuron.
1 INTRODUCTION
Certain species of freshwater tropical fish, known as weakly electric fish, possess an
active electric sense that allows them to detect and discriminate objects in their
environment using a self-generated electric field (Bullock and Heiligenberg, 1986).
They detect objects by sensing small perturbations in this electric field using an
array of specialized receptors, known as electroreceptors, that cover their body surface. Weakly electric fish often live in turbid water and tend to be nocturnal. These
conditions, which hinder visual perception, do not adversely affect the electric sense.
Hence the electrosensory system allows these fish to navigate and capture prey in
total darkness in much the same way as the sonar system of echolocating bats allows
them to do the same. A fundamental difference between bat echolocation and fish
"electrolocation" is that the propagation of the electric field emitted by the fish is
essentially instantaneous when considered on the time scales that characterize nervous system function. Thus rather than processing echo delays as bats do, electric
fish extract information from instantaneous amplitude and phase modulations of
their emitted signals.
The electric sense must cope with a wide range of stimulus intensities because the
magnitude of electric field perturbations varies considerably depending on the size,
distance and impedance of the object that gives rise to them (Bastian, 1981a).
The range of intensities that the system experiences is also affected by the conductivity of the surrounding water, which undergoes significant seasonal variation.
In the electrosensory system, there are no peripheral mechanisms to compensate
for variations in stimulus intensity. Unlike the visual system, which can regulate
the intensity of light arriving at photoreceptors by adjusting pupil diameter, the
electrosensory system has no equivalent means for directly regulating the overall
stimulus strength experienced by the electroreceptors, 1 and unlike the auditory
system, there are no efferent projections to the sensory periphery to control the
gain of the receptors themselves. The first opportunity for the electrosensory system to make gain adjustments occurs in a first-order sensory nucleus known as the
electrosensory lateral line lobe (ELL).
In the ELL, primary afferent axons from peripheral electroreceptors terminate on
the basal dendrites of a class of pyramidal cells referred to as E-cells (Maler et
al., 1981; Bastian, 1981b), which represent a subset of the output neurons for the
nucleus. These pyramidal cells also receive descending inputs from higher brain
centers on their apical dendrites (Maler et al., 1981). One noteworthy feature is
that the descending inputs are massive, far outnumbering the afferent inputs in total
number of synapses. Experiments have shown that the E-cells, unlike peripheral
electroreceptors, maintain a relatively constant response amplitude to electrosensory
stimuli when the overall electric field normalization is experimentally altered. This
automatic gain control capability is lost, however, when descending input to the ELL
is blocked (Bastian, 1986ab). The underlying neural mechanisms that subserve this
descending gain control capability are not yet fully understood, although it is known
that GABAergic inhibition plays a role (Shumway & Maler, 1989). We suggest that
one possible gain control mechanism could involve the regulation of total membrane
conductance of the pyramidal cells. In the next section we present a model based
on this idea and show how activity levels on descending pathways could regulate
both the gain and baseline excitation of a target neuron.
2 NEURAL CIRCUITRY FOR DESCENDING GAIN CONTROL
Figure 1 shows a schematic diagram of neural circuitry that could provide the basis
for a descending gain control mechanism. This circuitry is inspired by the circuitry
found in the ELL, but has been greatly simplified to retain only the aspects that
1 In principle, this could be achieved by regulating the strength of the fish's own electric
discharge. However, these fish maintain a remarkably stable discharge amplitude and such
a mechanism has never been observed.
[Figure 1 diagram: a descending excitatory pathway (CONTROL) and a descending inhibitory pathway acting through an inhibitory interneuron (INPUT) converge on a pyramidal cell (OUTPUT), which also receives primary afferent input; open circles mark excitatory synapses and filled circles mark inhibitory synapses.]
Figure 1: Neural circuitry for descending gain control. The gain of the pyramidal
cell response to an input signal arriving on its basilar dendrite can be controlled
by adjusting the tonic levels of activity on two descending pathways. A descending
excitatory pathway makes excitatory synapses (open circles) directly on the pyramidal cell. A descending inhibitory pathway acts through an inhibitory interneuron
(shown in gray) to activate inhibitory synapses (filled circles) on the pyramidal cell.
are essential for the proposed gain control mechanism. The pyramidal cell receives
afferent input on a basal dendrite and control inputs from two descending pathways.
One descending pathway makes excitatory synaptic connections directly on the
apical dendrite of the pyramidal cell, while a second pathway exerts a net inhibitory
effect on the pyramidal cell by acting through an inhibitory interneuron. We will
show that under appropriate conditions, the gain of the pyramidal cell's response
to an input signal arriving on its basal dendrite can be controlled by adjusting
the tonic levels of activity on the two descending pathways. At this point it is
worth pointing out that the spatial segregation of input and control pathways onto
different parts of the dendritic tree is not an essential feature of the proposed gain
control mechanism. However, by allowing independent experimental manipulation
of these two pathways, this segregation has played a key role in the discovery and
subsequent characterization of the gain control function in this system (Bastian, 1986ab).
The gain control function of the neural circuitry shown in Figure 1 can best be understood by considering an electrical equivalent circuit for the pyramidal cell. The
equivalent circuit shown in Figure 2 includes only the features that are necessary
to understand the gain control function and does not reflect the true complexity of
ELL pyramidal cells, which are known to contain many different types of voltage-dependent channels (Mathieson & Maler, 1988). The passive electrical properties of the circuit in Figure 2 are described by a membrane capacitance Cm, a leakage conductance gleak, and an associated reversal potential Eleak. The excitatory
descending pathway directly activates excitatory synapses on the pyramidal cell,
giving rise to an excitatory synaptic conductance gex with a reversal potential Eex.
[Figure 2 circuit diagram: the labeled elements include the membrane capacitance Cm, the leakage conductance gleak, and the associated reversal potential Eleak.]
Figure 2: Electrical equivalent circuit for the pyramidal cell in the gain control circuit. The excitatory and inhibitory conductances, gex and ginh, are shown as
variable resistances to indicate that their steady-state values are dependent on the
activity levels of the descending pathways.
The inhibitory descending pathway acts by exciting a class of inhibitory interneurons which in turn activate inhibitory synapses on the pyramidal cell with inhibitory conductance ginh and reversal potential Einh. In this model, the excitatory and inhibitory conductances gex and ginh represent the population conductances of all the individual excitatory and inhibitory synapses associated with the descending pathways. Although individual synaptic events give rise to a time-dependent conductance change (which is often modeled by an α function), we consider the regime
in which the activity levels on the descending pathways, the number of synapses
involved, and the synaptic time constants are such that the summed effect can be
well described by a single time-invariant conductance value for each pathway. The
input signal (the one under the influence of the gain control mechanism) is modeled
in a general form as a time-dependent current I(t). This current can represent either the synaptic current arising from activation of synapses in the primary afferent
pathway, or it can represent direct current injection into the cell, such as might
occur in an intracellular recording experiment.
The behavior of the membrane potential V(t) for this model system is described by
Cm dV(t)/dt + gleak(V(t) - Eleak) + gex(V(t) - Eex) + ginh(V(t) - Einh) = I(t)   (1)
In the absence of an input signal (I = 0), the system will reach a steady-state (dV/dt = 0) membrane potential Vss given by
Vss(I = 0) = (gleak Eleak + gex Eex + ginh Einh) / (gleak + gex + ginh)   (2)
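As a concrete illustration, the following minimal Python sketch (ours, not part of the original paper; all parameter values are arbitrary) integrates equation (1) with a forward-Euler step and confirms that the membrane relaxes to the steady state of equation (2):

    # Minimal sketch of the equivalent-circuit model; parameter values are
    # illustrative assumptions, not measurements from the paper.

    def steady_state(g_leak, g_ex, g_inh, E_leak, E_ex, E_inh):
        # Equation (2): conductance-weighted average of the reversal potentials.
        g_tot = g_leak + g_ex + g_inh
        return (g_leak * E_leak + g_ex * E_ex + g_inh * E_inh) / g_tot

    def simulate(I, dt=0.1, T=500.0, Cm=1.0, g_leak=0.1, g_ex=0.0, g_inh=0.0,
                 E_leak=-70.0, E_ex=0.0, E_inh=-90.0):
        # Forward-Euler integration of equation (1).
        V = 0.0   # start away from rest to show the relaxation
        trace = []
        for step in range(int(T / dt)):
            t = step * dt
            dV = (I(t) - g_leak*(V - E_leak) - g_ex*(V - E_ex)
                  - g_inh*(V - E_inh)) / Cm
            V += dt * dV
            trace.append(V)
        return trace

    trace = simulate(I=lambda t: 0.0)
    print(trace[-1])  # approaches -70.0 mV, the Vss of (2) when g_ex = g_inh = 0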
If we consider the input I(t) to give rise to fluctuations in membrane potential U(t)
about this steady state value
U(t) = V(t) - Vss   (3)
then (1) can be rewritten as
Cm dU(t)/dt + gtot U(t) = I(t)   (4)
where gtot is the total membrane conductance
gtot = gleak + gex + ginh   (5)
Equation (4) describes a first-order low-pass filter with a transfer function G(s), from input current to output voltage change, given by
G(s) = Rtot / (τs + 1)   (6)
where s is the complex frequency (s = iω), Rtot is the total membrane resistance (Rtot = 1/gtot), and τ is the RC time constant
τ = Rtot Cm = Cm / gtot   (7)
The frequency dependence of the response gain |G(iω)| is shown in Figure 3. For low frequency components of the input signal (ωτ ≪ 1), the gain is inversely proportional to the total membrane conductance gtot, while at high frequencies (ωτ ≫ 1), the gain is independent of gtot. This is due to the fact that the impedance of the RC circuit shown in Figure 2 is dominated by the resistive components at low frequencies and by the capacitive component at high frequencies. Note that the RC time constant τ, which characterizes the low-pass filter cutoff frequency, varies inversely with gtot. For components of the input signal below the cutoff frequency, gain control can be accomplished by regulating the total membrane conductance.
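The low-pass behavior can be checked numerically. This sketch (ours, not the paper's; Cm and gleak chosen so that τ = 100 msec for gtot = gleak, as in Figure 3) evaluates the normalized gain of equation (6):

    import numpy as np

    def gain_db(omega, g_tot, g_leak=1.0, Cm=0.1):
        # Equation (6): G(s) = Rtot / (tau*s + 1) with s = i*omega,
        # normalized by Gmax = 1/g_leak and expressed in dB.
        R_tot = 1.0 / g_tot
        tau = Cm / g_tot
        G = R_tot / np.abs(1j * omega * tau + 1.0)
        return 20.0 * np.log10(G * g_leak)

    omega = np.logspace(1, 5, 9)      # rad/sec
    for mult in (1, 10, 100):         # g_tot = 1x, 10x, 100x g_leak
        print(mult, np.round(gain_db(omega, g_tot=mult), 1))
    # Low frequencies: gain drops ~20 dB per tenfold increase in g_tot;
    # high frequencies: the three curves converge (capacitance dominates).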
In electrophysiological terms, this mechanism can be thought of in terms of regulating the input resistance of the neuron. As the total membrane conductance is
increased, the input resistance is decreased, meaning that a fixed amount of current
injection will cause a smaller change in membrane potential. Hence increasing the
total membrane conductance decreases the response gain.
In our model, we propose that regulation of total membrane conductance occurs
via activity on descending pathways that activate excitatory and inhibitory synaptic
conductances. For this proposed mechanism to be effective, these synaptic conductances must make a significant contribution to the total membrane conductance of
the pyramidal cell. Whether this condition actually holds for ELL pyramidal cells
has not yet been experimentally tested. However, it is not an unreasonable assumption to make, considering recent reports that synaptic background activity can have
a significant influence on the total membrane conductance of cortical pyramidal cells (Bernander et al., 1991) and cerebellar Purkinje cells (Rapp et al., 1992).
[Figure 3 plot: normalized gain in dB (from 20 down to -80) versus frequency ω in rad/sec, for gtot = gleak (τ = 100 msec), gtot = 10 gleak, and gtot = 100 gleak (τ = 1 msec).]
Figure 3: Gain as a function of frequency for three different values of total membrane conductance gtot. At low frequencies, gain is inversely proportional to gtot. Note that the time constant τ, which characterizes the low-pass cutoff frequency, also varies inversely with gtot. Gain is normalized to the maximum gain: Gmax = 1/gleak; Gain(dB) = 20 log10(G/Gmax).
3 CONTROL OF BASELINE EXCITATION
If the only functional goal was to achieve regulation of total membrane conductance, then synaptic background activity on a single descending pathway would be sufficient and there would be no need for the paired excitatory and inhibitory control pathways shown in Figure 1. However, the goal of gain control is to regulate the total
membrane conductance while holding the baseline level of excitation constant. In
other words, we would like to be able to change the sensitivity of a neuron's response
without changing its spontaneous level of activity (or steady-state resting potential
if it is below spiking threshold). By having paired excitatory and inhibitory control
pathways, as shown in Figure 1, we gain the extra degree-of-freedom necessary to
achieve this goal.
Equation (2) provided us with a relationship between the synaptic conductances in
our model and the steady-state membrane potential. In order to change the gain
of a neuron, without changing its baseline level of excitation, the excitatory and
inhibitory conductances must be adjusted in a way that achieves the desired total
membrane conductance gtot and maintains a constant Vss. Solving equations (2) and (5) simultaneously for gex and ginh, we find
gex = [gtot(Vss - Einh) - gleak(Eleak - Einh)] / (Eex - Einh)   (8)
ginh = [gtot(Eex - Vss) - gleak(Eex - Eleak)] / (Eex - Einh)   (9)
For example, consider a case where the reversal potentials are Eleak = -70 mV, Eex = 0 mV, and Einh = -90 mV. Assume we want to find values of the steady-state conductances, gex and ginh, that would result in a total membrane conductance that is twice the leakage conductance (i.e. gtot = 2 gleak) and would produce a steady-state depolarization of 10 mV (i.e. Vss = -60 mV). Using (8) and (9) we find the required synaptic conductance levels are gex = (4/9) gleak and ginh = (5/9) gleak.
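A quick numerical check of equations (8) and (9), using the example values above (our own sketch, not code from the paper):

    def control_conductances(g_tot, V_ss, g_leak, E_leak, E_ex, E_inh):
        # Equations (8) and (9): solve (2) and (5) for g_ex and g_inh given a
        # target total conductance g_tot and baseline potential V_ss.
        g_ex = (g_tot * (V_ss - E_inh) - g_leak * (E_leak - E_inh)) / (E_ex - E_inh)
        g_inh = (g_tot * (E_ex - V_ss) - g_leak * (E_ex - E_leak)) / (E_ex - E_inh)
        return g_ex, g_inh

    g_leak = 1.0
    g_ex, g_inh = control_conductances(g_tot=2*g_leak, V_ss=-60.0, g_leak=g_leak,
                                       E_leak=-70.0, E_ex=0.0, E_inh=-90.0)
    print(g_ex, g_inh)  # 4/9 and 5/9 of g_leak, as in the worked example
    assert abs(g_ex + g_inh - g_leak) < 1e-12  # consistent with g_tot = 2*g_leak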
4 DISCUSSION
The ability to regulate a target neuron's gain using descending control signals would
provide the nervous system with a powerful means for implementing adaptive signal
processing algorithms in sensory processing pathways as well as other parts of the
brain. The simple gain control mechanism proposed here, involving the regulation
of total membrane conductance, may find widespread use in the nervous system.
Determining whether or not this is the case, of course, requires experimental verification. Even in the electrosensory system, which provided the inspiration for
this model, definitive experimental tests of the proposed mechanism have yet to
be carried out. Fortunately the model provides a straightforward experimentally
testable prediction, namely that if activity levels on the presumed control pathways are changed, then the input resistance of the target neuron will reflect those
changes. In the case of the ELL, the prediction is that if descending pathways
were silenced while monitoring the input resistance of an E-type pyramidal cell, one
would observe an increase in input resistance corresponding to the elimination of
the descending contributions to the total membrane conductance.
We have mentioned that the gain control circuitry of Figure 1 was inspired by the
neural circuitry of the ELL. For those familiar with this circuitry, it is interesting to
speculate on the identity of the interneuron in the inhibitory control pathway. In the
gymnotid ELL, there are at least six identified classes of inhibitory interneurons. For
the proposed gain control mechanism, we are interested in identifying those that
receive descending input and which make inhibitory synapses onto pyramidal cells.
Four of the six classes meet these criteria: granule cell type 2 (GC2), polymorphic,
stellate, and ventral molecular layer neurons. While all four classes may participate
to some extent in the gain control mechanism, one would predict, based on cell
number and synapse location, that GC2 (as suggested by Shumway & Maler, 1989)
and polymorphic cells would make the dominant contribution. The morphology of
GC2 and polymorphic neurons differs somewhat from that shown in Figure 1. In
addition to the apical dendrite, which is shown in the figure, these neurons also
have a basal dendrite that receives primary afferent input. GC2 and polymorphic
neurons are excited by primary afferent input and thus provide additional inhibition
to pyramidal cells when afferent activity levels increase. This can be viewed as
providing a feedforward component to the automatic gain control mechanism.
In this paper, we have confined our analysis to the effects of tonic changes in descending activity. While this may be a reasonable approximation for certain experimental manipulations, it is unlikely to be a good representation of the dynamic patterns that occur under natural conditions, particularly since the descending pathways form part of a feedback loop that includes the ELL output neurons. The full story in the electrosensory system will undoubtedly be much more complex. For
example, there is already experimental evidence demonstrating that, in addition
to gain control, descending pathways influence the spatial and temporal filtering
properties of ELL output neurons (Bastian, 1986ab; Shumway & Maler, 1989).
Acknowledgements
This work was supported by NIMH 1-R29-MH49242-01. Thanks to Joe Bastian and
Lenny Maler for many enlightening discussions on descending control in the ELL.
References
Bastian, J. (1981a) Electrolocation I: An analysis of the effects of moving objects and other electrical stimuli on the electroreceptor activity of Apteronotus albifrons. J. Comp. Physiol. 144, 465-479.
Bastian, J. (1981b) Electrolocation II: The effects of moving objects and other electrical stimuli on the activities of two categories of posterior lateral line lobe cells in Apteronotus albifrons. J. Comp. Physiol. 144, 481-494.
Bastian, J. (1986a) Gain control in the electrosensory system mediated by descending inputs to the electrosensory lateral line lobe. J. Neurosci. 6, 553-562.
Bastian, J. (1986b) Gain control in the electrosensory system: a role for the descending projections to the electrosensory lateral line lobe. J. Comp. Physiol. 158, 505-515.
Bernander, O., Douglas, R.J., Martin, K.A.C. & Koch, C. (1991) Synaptic background activity influences spatiotemporal integration in single pyramidal cells. Proc. Natl. Acad. Sci. USA 88, 11569-11573.
Bullock, T.H. & Heiligenberg, W., eds. (1986) Electroreception. Wiley, New York.
Maler, L., Sas, E. and Rogers, J. (1981) The cytology of the posterior lateral line lobe of high frequency weakly electric fish (Gymnotidei): Dendritic differentiation and synaptic specificity in a simple cortex. J. Comp. Neurol. 195, 87-140.
Mathieson, W.B. & Maler, L. (1988) Morphological and electrophysiological properties of a novel in vitro preparation: the electrosensory lateral line lobe brain slice. J. Comp. Physiol. 163, 489-506.
Rapp, M., Yarom, Y. & Segev, I. (1992) The impact of parallel fiber background activity on the cable properties of cerebellar Purkinje cells. Neural Comp. 4, 518-533.
Shumway, C.A. & Maler, L.M. (1989) GABAergic inhibition shapes temporal and spatial response properties of pyramidal cells in the electrosensory lateral line lobe of gymnotiform fish. J. Comp. Physiol. 164, 391-407.
6,457 | 6,840 | Information Theoretic Properties of Markov Random
Fields, and their Algorithmic Applications
Linus Hamilton*
Frederic Koehler†
Ankur Moitra‡
Abstract
Markov random fields are a popular model for high-dimensional probability distributions. Over the years, many mathematical, statistical and algorithmic problems
on them have been studied. Until recently, the only known algorithms for provably
learning them relied on exhaustive search, correlation decay or various incoherence assumptions. Bresler [4] gave an algorithm for learning general Ising models
on bounded degree graphs. His approach was based on a structural result about
mutual information in Ising models.
Here we take a more conceptual approach to proving lower bounds on the mutual
information. Our proof generalizes well beyond Ising models, to arbitrary Markov
random fields with higher order interactions. As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on n nodes
with r-order interactions in nr time and log n sample complexity. Our algorithms
also extend to various partial observation models.
1 Introduction
1.1 Background
Markov random fields are a popular model for defining high-dimensional distributions by using a graph to encode conditional dependencies among a collection of random variables. More precisely, the distribution is described by an undirected graph G = (V, E) where to each of the n nodes u ∈ V we associate a random variable Xu which takes on one of ku different states. The crucial property is that the conditional distribution of Xu should only depend on the states of u's neighbors. It turns
out that as long as every configuration has positive probability, the distribution can be written as
\Pr(a_1, \dots, a_n) = \exp\left( \sum_{\ell=1}^{r} \sum_{i_1 < i_2 < \dots < i_\ell} \theta^{i_1 \dots i_\ell}(a_1, \dots, a_n) - C \right)   (1)
Here \theta^{i_1 \dots i_\ell} : [k_{i_1}] \times \dots \times [k_{i_\ell}] \to \mathbb{R} is a function that takes as input the configuration of states on the nodes i_1, i_2, \dots, i_\ell and is assumed to be zero on non-cliques. These functions are referred to as
clique potentials. In the equation above, C is a constant that ensures the distribution is normalized
and is called the log-partition function. Such distributions are also called Gibbs measures and arise
frequently in statistical physics and have numerous applications in computer vision, computational
biology, social networks and signal processing. The Ising model corresponds to the special case
* Massachusetts Institute of Technology. Department of Mathematics. Email: luh@mit.edu. This work was supported in part by Hertz Fellowship.
† Massachusetts Institute of Technology. Department of Mathematics. Email: fkoehler@mit.edu.
‡ Massachusetts Institute of Technology. Department of Mathematics and the Computer Science and Artificial Intelligence Lab. Email: moitra@mit.edu. This work was supported in part by NSF CAREER Award CCF-1453261, NSF Large CCF-1565235, a David and Lucile Packard Fellowship and an Alfred P. Sloan Fellowship.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
where every node has two possible states and the only non-zero clique potentials correspond to
single nodes or to pairs of nodes.
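As a concrete illustration (a sketch of ours, not code from the paper), the unnormalized weight in (1) can be evaluated directly from a dictionary of clique potentials; here a toy 3-node Ising-style model with binary states:

    import itertools, math

    # Clique potentials: keys are sorted node tuples, values map state tuples
    # to reals. The numbers below are arbitrary illustrative choices.
    theta = {
        (0,): {(0,): 0.1, (1,): -0.1},
        (0, 1): {(0, 0): 0.5, (1, 1): 0.5, (0, 1): -0.5, (1, 0): -0.5},
        (1, 2): {(0, 0): 0.3, (1, 1): 0.3, (0, 1): -0.3, (1, 0): -0.3},
    }

    def log_weight(assignment):
        # Inner sum of (1): add every clique potential on this configuration.
        return sum(table[tuple(assignment[i] for i in clique)]
                   for clique, table in theta.items())

    # The log-partition function C normalizes the distribution.
    C = math.log(sum(math.exp(log_weight(a))
                     for a in itertools.product([0, 1], repeat=3)))
    print(math.exp(log_weight((1, 1, 0)) - C))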
Over the years, many sorts of mathematical, statistical and algorithmic problems have been studied on Markov random fields. Such models first arose in the context of statistical physics where
they were used to model systems of interacting particles and predict temperatures at which phase
transitions occur [6]. A rich body of work in mathematical physics aims to rigorously understand
such phenomena. It is also natural to seek algorithms for sampling from the Gibbs distribution when
given its clique potentials. There is a natural Markov chain to do so, and a number of works have
identified a critical temperature (in our model this is a part of the clique potentials) above which the
Markov chain mixes rapidly and below which it mixes slowly [14, 15]. Remarkably in some cases
these critical temperatures also demarcate where approximate sampling goes from being easy to
being computationally hard [19, 20]. Finally, various inference problems on Markov random fields
lead to graph partitioning problems such as the metric labelling problem [12].
In this paper, we will be primarily concerned with the structure learning problem. Given samples
from a Markov random field, our goal is to learn the underlying graph G with high probability.
The problem of structure learning was initiated by Chow and Liu [7] who gave an algorithm for
learning Markov random fields whose underlying graph is a tree by computing the maximum-weight
spanning tree where the weight of each edge is equal to the mutual information of the variables at its
endpoints. The running time and sample complexity are on the order of n^2 and log n respectively.
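For intuition, here is a compact sketch of the Chow-Liu procedure (ours, not an excerpt from [7]): plug-in mutual-information edge weights followed by a Kruskal-style maximum spanning tree.

    import math
    from collections import Counter

    def empirical_mi(samples, i, j):
        # Plug-in estimate of I(Xi; Xj) from rows of `samples`.
        n = len(samples)
        pij = Counter((s[i], s[j]) for s in samples)
        pi = Counter(s[i] for s in samples)
        pj = Counter(s[j] for s in samples)
        return sum((c/n) * math.log((c/n) / ((pi[a]/n) * (pj[b]/n)))
                   for (a, b), c in pij.items())

    def chow_liu_tree(samples, d):
        # Maximum spanning tree on mutual-information edge weights (Kruskal).
        edges = sorted(((empirical_mi(samples, i, j), i, j)
                        for i in range(d) for j in range(i+1, d)), reverse=True)
        parent = list(range(d))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        tree = []
        for w, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                tree.append((i, j))
        return tree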
Since then, a number of works have sought algorithms for more general families of Markov random
fields. There have been generalizations to polytrees [10], hypertrees [21] and tree mixtures [2].
Other works construct the neighborhood by exhaustive search [1, 8, 5], impose certain incoherence
conditions [13, 17, 11] or require that there are no long range correlations (e.g. between nodes at
large distance in the underlying graph) [3, 5].
In a breakthrough work, Bresler [4] gave a simple greedy algorithm that provably works for any
bounded degree Ising model, even if it has long-range correlations. This work used mutual information as its underlying progress measure and for each node it constructed its neighborhood. For a set
S of nodes, let XS denote the random variable representing their joint state. Then the key fact is the
following:
Fact 1.1. For any node u, for any S ⊆ V \ {u} that does not contain all of u's neighbors, there is a node v ≠ u which has non-negligible conditional mutual information (conditioned on XS) with u.
This fact is simultaneously surprising and not surprising. When S contains all the neighbors of u, then Xu has zero conditional mutual information (again conditioned on XS) with any other node because Xu only depends on XS. Conversely, shouldn't we expect that if S does not contain the entire neighborhood of u, that there is some neighbor that has nonzero conditional mutual information
with u? The difficulty is that the influence of a neighbor on u can be cancelled out indirectly by the
other neighbors of u. The key fact above tells us that it is impossible for the influences to all cancel
out. But is this fact only true for Ising models or is it an instance of a more general phenomenon
that holds over any Markov random field?
1.2 Our Techniques
In this work, we give a vast generalization of Bresler's [4] lower bound on the conditional mutual
information. We prove that it holds in general Markov random fields with higher order interactions
provided that we look at sets of nodes. More precisely we prove, in a Markov random field with
non-binary states and order up to r interactions, the following fundamental fact:
Fact 1.2. For any node u, for any S ⊆ V \ {u} that does not contain all of u's neighbors, there is a set I of at most r − 1 nodes which does not contain u where Xu and XI have non-negligible conditional mutual information (conditioned on XS).
Our approach goes through a two-player game that we call the Guessing Game between Alice and Bob. Alice samples a configuration X1, X2, . . . , Xn and reveals I and XI for a randomly chosen set of u's neighbors with |I| ≤ r − 1. Bob's goal is to guess Xu with non-trivial advantage over its marginal distribution. We give an explicit strategy for Bob that achieves positive expected value. Our approach is quite general because we base Bob's guess on the contribution of XI to the overall clique potentials that Xu participates in, in a way that the expectation over I yields an unbiased
estimator of the total clique potential. The fact that the strategy has positive expected value is then immediate, and all that remains is to prove a quantitative lower bound on it using the law of total variance. From here, the intuition is that if the mutual information I(Xu ; XI) were zero for all sets I then Bob could not have positive expected value in the Guessing Game.
1.3 Our Results
Let Γ(u) denote the neighbors of u. We require certain conditions (Definition 2.3) on the clique potentials to hold, which we call α, β-non-degeneracy, to ensure that the presence or absence of
each hyperedge can be information-theoretically determined from few samples (essentially that no
clique potential is too large and no non-zero clique potential is too small). Under this condition, we
prove:
Theorem 1.3. Fix any node u in an α, β-non-degenerate Markov random field of bounded degree and a subset of the vertices S which does not contain the entire neighborhood of u. Then taking I uniformly at random from the subsets of the neighbors of u not contained in S of size s = min(r − 1, |Γ(u) \ S|), we have E_I[I(Xu ; XI | XS)] ≥ C.
See Theorem 4.3 which gives the precise dependence of C on all of the constants, including α, β, the maximum degree D, the order of the interactions r and the upper bound K on the number of states of each node. We remark that C is exponentially small in D, r and β and there are examples where this dependence is necessary [18].
Next we apply our structural result within Bresler's [4] greedy framework for structure learning to
obtain our main algorithmic result. We obtain an algorithm for learning Markov random fields on
bounded degree graphs with a logarithmic number of samples, which is information-theoretically
optimal [18]. More precisely we prove:
Theorem 1.4. Fix any α, β-non-degenerate Markov random field on n nodes with r-order interactions and bounded degree. There is an algorithm for learning G that succeeds with high probability given C′ log n samples and runs in time polynomial in n^r.
Remark 1.5. It is easy to encode an (r − 1)-sparse parity with noise as a Markov random field with order r interactions. This means if we could improve the running time to n^{o(r)} this would yield the first n^{o(k)} algorithm for learning k-sparse parities with noise, which is a long-standing open question. The best known algorithm of Valiant [22] runs in time n^{0.8k}.
See Theorem 5.1 for a more precise statement. The constant C′ depends doubly exponentially on D. In the special case of Ising models with no external field, Vuffray et al. [23] gave an algorithm based on convex programming that reduces the dependence on D to singly exponential. In greedy approaches based on mutual information like the one we consider here, doubly-exponential dependence on D seems intrinsic. As in Bresler's [4] work, we construct a superset of the neighborhood
that contains roughly 1/C nodes where C comes from Theorem 1.3. Recall that C is exponentially
small in D. Then to accurately estimate conditional mutual information when conditioning on the
states of this many nodes, we need doubly exponential in D many samples.
Our results extend to a model where we are only allowed partial observations. More precisely, for each sample we are allowed to specify a set J of size at most C′′ and all we observe is XJ. We prove:
Theorem 1.6. Fix any α, β-non-degenerate Markov random field on n nodes with r-order interactions and bounded degree. There is an algorithm for learning G with C′′-bounded queries that succeeds with high probability given C′ log n samples and runs in time polynomial in n^r.
See Theorem 5.3 for a more precise statement. This is a natural scenario that arises when it is too expensive to obtain a sample where the states of all nodes are known. We also consider a model where each node's state is erased (and unobserved) independently with some fixed probability p. See
the supplementary material for a precise statement. The fact that we can straightforwardly obtain
algorithms for these alternative settings demonstrates the flexibility of greedy, information-theoretic
approaches to learning.
2 Preliminaries
For reference, all fundamental parameters of the graphical model (max degree, etc.) are defined in the next two subsections. In terms of these fundamental parameters, we define additional parameters Λ and γ in (3), C′(α, K, Λ) in Theorem 4.3, and τ in (5) and L in (6).
2.1 Markov Random Fields and the Canonical Form
Let K be an upper bound on the maximum number of states of any node. Recall the joint probability
distribution of the model, given in (1). For notational convenience, even when i_1, . . . , i_\ell are not sorted in increasing order, we define \theta^{i_1 \dots i_\ell}(a_1, \dots, a_\ell) = \theta^{i'_1 \dots i'_\ell}(a'_1, \dots, a'_\ell) where the i'_1, . . . , i'_\ell are the sorted version of i_1, . . . , i_\ell and the a'_1, . . . , a'_\ell are the corresponding copies of a_1, . . . , a_\ell.
The parameterization in (1) is not unique. It will be helpful to put it in a normal form as below. A tensor fiber is the vector given by fixing all of the indices of the tensor except for one; this generalizes the notion of row/column in matrices. For example, for any 1 ≤ m ≤ \ell, i_1 < . . . < i_m < . . . < i_\ell and a_1, . . . , a_{m-1}, a_{m+1}, . . . , a_\ell fixed, the corresponding tensor fiber is the set of elements \theta^{i_1 \dots i_\ell}(a_1, \dots, a_m, \dots, a_\ell) where a_m ranges from 1 to k_{i_m}.
Definition 2.1. We say that the weights θ are in canonical form⁴ if for every tensor \theta^{i_1 \dots i_\ell}, the sum over all of the tensor fibers of \theta^{i_1 \dots i_\ell} is zero.
Moreover we say that a tensor with the property that the sum over all tensor fibers is zero is a centered tensor. Hence having a Markov random field in canonical form just means that all of the tensors corresponding to its clique potentials are centered. We observe that every Markov random field can be put in canonical form:
Claim 2.2. Every Markov random field can be put in canonical form.
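A sketch of the centering operation behind Claim 2.2 (ours): subtracting the mean along each axis in turn makes every fiber sum to zero, and since centering is linear, earlier axes stay centered. The discarded means would be pushed into lower-order potentials, which is why the distribution itself is unchanged.

    import numpy as np

    def center_tensor(T):
        # Subtract the mean along each axis in turn; afterwards every tensor
        # fiber (along every axis) sums to zero.
        T = np.asarray(T, dtype=float)
        for ax in range(T.ndim):
            T = T - T.mean(axis=ax, keepdims=True)
        return T

    T = np.random.randn(3, 4, 5)
    C = center_tensor(T)
    for ax in range(C.ndim):
        assert np.allclose(C.sum(axis=ax), 0.0)  # all fibers sum to 0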
2.2 Non-Degeneracy
Let H = (V, H) denote a hypergraph obtained from the Markov random field as follows. For every non-zero tensor \theta^{i_1 \dots i_\ell} we associate a hyperedge (i_1 \dots i_\ell). We say that a hyperedge h is maximal if no other hyperedge of strictly larger size contains h. Now G = (V, E) can be obtained by replacing every hyperedge with a clique. Let D be a bound on the maximum degree. Recall that Γ(u) denotes the neighbors of u. We will require the following conditions in order to ensure that the presence and absence of every maximal hyperedge is information-theoretically determined:
Definition 2.3. We say that a Markov random field is α, β-non-degenerate if
(a) Every edge (i, j) in the graph G is contained in some hyperedge h ∈ H where the corresponding tensor is non-zero.
(b) Every maximal hyperedge h ∈ H has at least one entry lower bounded by α in absolute value.
(c) Every entry of \theta^{i_1 i_2 \dots i_\ell} is upper bounded by a constant β in absolute value.
We will refer to a hyperedge h with an entry lower bounded by α in absolute value as α-nonvanishing.
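The two numeric conditions are straightforward to test given the potential tables; a sketch (ours), where `maximal` is assumed to list the maximal hyperedges of H:

    def is_non_degenerate(theta, maximal, alpha, beta):
        # Condition (c): no entry of any potential exceeds beta in magnitude.
        if any(abs(v) > beta for table in theta.values() for v in table.values()):
            return False
        # Condition (b): every maximal hyperedge has an entry of size >= alpha.
        return all(max(abs(v) for v in theta[h].values()) >= alpha
                   for h in maximal)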
2.3 Bounds on Conditional Probabilities
First we review properties of the conditional probabilities in a Markov random field as well as introduce some convenient notation which we will use later on. Fix a node u and its neighborhood U = Γ(u). Then for any R ∈ [ku] we have
P(X_u = R \mid X_U) = \frac{\exp(E^X_{u,R})}{\sum_{B=1}^{k_u} \exp(E^X_{u,B})}   (2)
⁴ This is the same as writing the log of the probability mass function according to the Efron-Stein decomposition with respect to the uniform measure on colors; this decomposition is known to be unique. See e.g. Chapter 8 of [16].
where we define
E^X_{u,R} = \sum_{\ell=1}^{r} \sum_{i_2 < \dots < i_\ell} \theta^{u i_2 \dots i_\ell}(R, X_{i_2}, \dots, X_{i_\ell})
and i_2, . . . , i_\ell range over elements of the neighborhood U; when \ell = 1 the inner sum is just \theta^u(R). Let X_{-u} = X_{[n] \setminus \{u\}}. To see that the above is true, first condition on X_{-u}, and observe that the probability for a certain Xu is proportional to \exp(E^X_{u,R}), which gives the right hand side of (2). Then apply the tower property for conditional probabilities.
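Concretely, (2) is just a softmax over the energies E^X_{u,R}; a minimal sketch (ours, with illustrative numbers):

    import math

    def conditional_distribution(energies):
        # Equation (2): P(Xu = R | X_U) is a softmax of the energies E_{u,R}.
        m = max(energies)                    # subtract max for numerical stability
        w = [math.exp(e - m) for e in energies]
        Z = sum(w)
        return [x / Z for x in w]

    print(conditional_distribution([0.2, -1.0, 0.5]))  # node with k_u = 3 states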
Therefore if we define (where |T|_max denotes the maximum entry of a tensor T)
\Lambda := \sup_u \sum_{\ell=1}^{r} \sum_{i_2 < \dots < i_\ell} |\theta^{u i_2 \dots i_\ell}|_{\max} \le \beta \sum_{\ell=1}^{r} \binom{D}{\ell-1}, \qquad \gamma := \frac{\exp(-2\Lambda)}{K}   (3)
then for any R
P(X_u = R \mid X_U) \ge \frac{\exp(-\Lambda)}{K \exp(\Lambda)} = \frac{1}{K} \exp(-2\Lambda) = \gamma   (4)
Observe that if we pick any node i and consider the new Markov random field given by conditioning on a fixed value of Xi, then the value of Λ for the new Markov random field is non-increasing.
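The quantities in (3) and (4) are easy to compute from the potential tables; a per-node sketch (ours), reusing the dictionary-of-tables representation from the earlier example:

    import math

    def lambda_and_gamma(theta, u, K):
        # Per-node version of (3): sum, over cliques containing u, the largest
        # absolute entry of the corresponding table; Lambda in the text is the
        # sup of this over u. gamma then lower-bounds every conditional
        # probability as in (4).
        lam = sum(max(abs(v) for v in table.values())
                  for clique, table in theta.items() if u in clique)
        return lam, math.exp(-2.0 * lam) / K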
3 The Guessing Game
Here we introduce a game-theoretic framework for understanding mutual information in general Markov random fields. The Guessing Game is defined as follows:
1. Alice samples X = (X_1, \dots, X_n) and X' = (X'_1, \dots, X'_n) independently from the Markov random field
2. Alice samples R uniformly at random from [k_u]
3. Alice samples a set I of size s = min(r − 1, d_u) uniformly at random from the neighbors of u
4. Alice tells Bob I, X_I and R
5. Bob wagers w with |w| \le \Lambda K \binom{D}{r-1}
6. Bob gets \xi = w 1_{X_u = R} - w 1_{X'_u = R}
Bob's goal is to guess Xu given knowledge of the states of some of u's neighbors. The Markov random field (including all of its parameters) is common knowledge. The intuition is that if Bob can obtain a positive expected value, then there must be some set I of neighbors of u which has non-zero mutual information. In this section, we will show that there is a simple, explicit strategy for Bob that yields positive expected value.
3.1 A Good Strategy for Bob
Here we will show an explicit strategy for Bob that has positive expected value. Our analysis will rest on the following key lemma:
Lemma 3.1. There is a strategy for Bob that wagers at most \Lambda K \binom{D}{r-1} in absolute value that satisfies
E_{I,X_I}[w \mid X_{-u}, R] = E^X_{u,R} - \sum_{B \ne R} E^X_{u,B}
Proof. First we explicitly define Bob's strategy. Let
\varphi(R, I, X_I) = \sum_{\ell=1}^{s} C_{u,\ell,s} \sum_{i_1 < i_2 < \dots < i_\ell} 1_{\{i_1 \dots i_\ell\} \subseteq I} \, \theta^{u i_1 \dots i_\ell}(R, X_{i_1}, \dots, X_{i_\ell})
where C_{u,\ell,s} = \binom{d_u}{s} / \binom{d_u - \ell}{s - \ell}. Then Bob wagers
w = \varphi(R, I, X_I) - \sum_{B \ne R} \varphi(B, I, X_I)
Notice that the strategy only depends on X_I because all terms in the summation where \{i_1 \dots i_\ell\} are not a subset of I have zero contribution.
The intuition behind this strategy is that the weighting term satisfies
C_{u,\ell,s} = \frac{1}{\Pr[\{i_1, \dots, i_\ell\} \subseteq I]}
Thus when we take the expectation over I and X_I we get
E_{I,X_I}[\varphi(R, I, X_I) \mid X_{-u}, R] = \sum_{\ell=1}^{r} \sum_{i_2 < \dots < i_\ell} \theta^{u i_2 \dots i_\ell}(R, X_{i_2}, \dots, X_{i_\ell}) = E^X_{u,R}
and hence E_{I,X_I}[w \mid X_{-u}, R] = E^X_{u,R} - \sum_{B \ne R} E^X_{u,B}. To complete the proof, notice that C_{u,\ell,s} \le \binom{D}{r-1}, which using the definition of \Lambda implies that |\varphi(B, I, X_I)| \le \Lambda \binom{D}{r-1} for any state B, and thus Bob wagers at most the desired amount (in absolute value). ∎
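The strategy of Lemma 3.1 is mechanical to implement. The following sketch is ours, not the paper's: here theta is assumed to hold only the potentials whose clique contains u, keyed with u first, and X maps node indices to observed states.

    from math import comb

    def phi(R, I, X, theta, C_weight):
        # phi(R, I, X_I): reweighted sum of the clique potentials at u whose
        # neighbor sets are fully revealed by I, with u's coordinate set to R.
        total = 0.0
        for clique, table in theta.items():      # clique = (u, i_1, ..., i_l)
            rest = clique[1:]
            if set(rest) <= set(I):
                total += C_weight[len(rest)] * table[(R,) + tuple(X[i] for i in rest)]
        return total

    def wager(R, I, X, theta, d_u, s, k_u):
        # C_{u,l,s} = binom(d_u, s) / binom(d_u - l, s - l) = 1 / Pr[subset in I].
        C_weight = {l: comb(d_u, s) / comb(d_u - l, s - l) for l in range(s + 1)}
        return phi(R, I, X, theta, C_weight) - sum(
            phi(B, I, X, theta, C_weight) for B in range(k_u) if B != R)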
Now we are ready to analyze the strategy:
Theorem 3.2. There is a strategy for Bob that wagers at most \Lambda K \binom{D}{r-1} in absolute value which satisfies
E[\xi] \ge \frac{4 \alpha^2 \gamma^{r-1}}{r 2^r e^{2\Lambda}}
Proof. We will use the strategy from Lemma 3.1. First we fix X_{-u}, X'_{-u} and R. Then we have
E_{I,X_I}[\xi \mid X_{-u}, X'_{-u}, R] = E_{I,X_I}[w \mid X_{-u}, R] \left( \Pr[X_u = R \mid X_{-u}, R] - \Pr[X'_u = R \mid X'_{-u}, R] \right)
which follows because \xi = w 1_{X_u = R} - w 1_{X'_u = R} and because w and X_u do not depend on X'_{-u} and similarly X'_u does not depend on X_{-u}. Now using (2) we calculate:
\Pr[X_u = R \mid X_{-u}, R] - \Pr[X'_u = R \mid X'_{-u}, R] = \frac{\exp(E^X_{u,R})}{\sum_B \exp(E^X_{u,B})} - \frac{\exp(E^{X'}_{u,R})}{\sum_B \exp(E^{X'}_{u,B})} = \frac{1}{\mathcal{D}} \sum_{B \ne R} \left( \exp(E^X_{u,R} + E^{X'}_{u,B}) - \exp(E^X_{u,B} + E^{X'}_{u,R}) \right)
where \mathcal{D} = \left( \sum_B \exp(E^X_{u,B}) \right) \left( \sum_B \exp(E^{X'}_{u,B}) \right). Thus putting it all together we have
E_{I,X_I}[\xi \mid X_{-u}, X'_{-u}, R] = \frac{1}{\mathcal{D}} \left( E^X_{u,R} - \sum_{B \ne R} E^X_{u,B} \right) \sum_{B \ne R} \left( \exp(E^X_{u,R} + E^{X'}_{u,B}) - \exp(E^X_{u,B} + E^{X'}_{u,R}) \right)
Now it is easy to see that
\sum_{\text{distinct } R, G, B} E^X_{u,B} \left( \exp(E^X_{u,R} + E^{X'}_{u,G}) - \exp(E^X_{u,G} + E^{X'}_{u,R}) \right) = 0
which follows because when we interchange R and G the entire term multiplies by a negative one
and so we can pair up the terms in the summation so that they exactly cancel. Using this identity we
get
E_{I,X_I}[\xi \mid X_{-u}, X'_{-u}] = \frac{1}{k_u \mathcal{D}} \sum_R \sum_{B \ne R} \left( E^X_{u,R} - E^X_{u,B} \right) \left( \exp(E^X_{u,R} + E^{X'}_{u,B}) - \exp(E^X_{u,B} + E^{X'}_{u,R}) \right)
where we have also used the fact that R is uniform on k_u. And finally using the fact that X_{-u} and X'_{-u} are identically distributed we can sample Y_{-u} and Z_{-u} and flip a coin to decide whether we set X_{-u} = Y_{-u} and X'_{-u} = Z_{-u} or vice-versa. Now we have
E_{I,X_I}[\xi \mid Y_{-u}, Z_{-u}] = \frac{1}{2 k_u \mathcal{D}} \sum_R \sum_{B \ne R} \left( E^Y_{u,R} - E^Y_{u,B} - E^Z_{u,R} + E^Z_{u,B} \right) \left( e^{E^Y_{u,R} + E^Z_{u,B}} - e^{E^Y_{u,B} + E^Z_{u,R}} \right)
With the appropriate notation it is easy to see that the above sum is strictly positive. Let a_{R,B} = E^Y_{u,R} + E^Z_{u,B} and b_{R,B} = E^Z_{u,R} + E^Y_{u,B}. With this notation:
E_{I,X_I}[\xi \mid Y_{-u}, Z_{-u}] = \frac{1}{2 k_u \mathcal{D}} \sum_R \sum_{B \ne R} \left( a_{R,B} - b_{R,B} \right) \left( \exp(a_{R,B}) - \exp(b_{R,B}) \right)
Since exp(x) is a strictly increasing function it follows that as long as a_{R,B} \ne b_{R,B} for some term in the sum, the sum is positive. In Lemma 3.3 we prove that the expectation over Y and Z of this sum is at least \frac{4 \alpha^2 \gamma^{r-1}}{r 2^r e^{2\Lambda}}, which completes the proof.
In the supplementary material we show how to use the law of total variance to give a quantitative
lower bound on the sum that arose in the proof of Theorem 3.2. More precisely we show:
Lemma 3.3.
E_{Y,Z}\left[ \sum_R \sum_{B \ne R} \left( E^Y_{u,R} - E^Y_{u,B} - E^Z_{u,R} + E^Z_{u,B} \right) \left( \exp(E^Y_{u,R} + E^Z_{u,B}) - \exp(E^Y_{u,B} + E^Z_{u,R}) \right) \right] \ge \frac{4 \alpha^2 \gamma^{r-1}}{r 2^r e^{2\Lambda}}
4 Implications for Mutual Information
In this section we show that Bob's strategy implies a lower bound on the mutual information between node u and a subset I of its neighbors of size at most r − 1. We then extend the argument to work
with conditional mutual information as well.
4.1 Mutual Information in Markov Random Fields
Recall that the goal of the Guessing Game is for Bob to use information about the states of nodes
I to guess the state of node u. Intuitively, if XI conveys no information about Xu then it should
contradict the fact that Bob has a strategy with positive expected value. We make this precise below.
Our argument proceeds in two steps. First we upper bound the expected value of any strategy.
Lemma 4.1. For any strategy,
E[\xi] \le \Lambda K \binom{D}{r-1} \, E_{I,X_I,R}\left[ \left| \Pr[X_u = R \mid X_I] - \Pr[X_u = R] \right| \right]
Intuitively this follows because Bob's optimal strategy given I, X_I and R is to guess
w = \operatorname{sgn}\left( \Pr[X_u = R \mid X_I] - \Pr[X_u = R] \right) \Lambda K \binom{D}{r-1}
Next we lower bound the mutual information using (essentially) the same quantity. We prove
Lemma 4.2.
\sqrt{\frac{1}{2} I(X_u ; X_I)} \ge \frac{1}{K^r} \, E_{X_I,R}\left[ \left| \Pr(X_u = R \mid X_I) - \Pr(X_u = R) \right| \right]
These bounds together yield a lower bound on the mutual information. In the supplementary material, we show how to extend the lower bound for mutual information to conditional mutual information. The main idea is to show there is a setting of XS where the hyperedges do not completely
cancel out in the Markov random field we obtain by conditioning on XS .
Theorem 4.3. Fix a vertex u such that all of the maximal hyperedges containing u are α-nonvanishing, and a subset of the vertices S which does not contain the entire neighborhood of u. Then taking I uniformly at random from the subsets of the neighbors of u not contained in S of size s = min(r − 1, |Γ(u) \ S|),
E_I\left[ \sqrt{\frac{1}{2} I(X_u ; X_I \mid X_S)} \right] \ge C'(\alpha, K, \Lambda)
where explicitly
C'(\alpha, K, \Lambda) := \frac{4 \alpha^2 \gamma^{r+D-1}}{r 2^r K^{r+1} \binom{D}{r-1} \Lambda e^{2\Lambda}}
5 Applications
We now employ the greedy approach of Bresler [4] which was previously used to learn Ising models on bounded degree graphs. Suppose we are given m independent samples from the Markov random field. Let \hat{\Pr} denote the empirical distribution and let \hat{E} denote the expectation under this distribution.
We compute empirical estimates for a certain information theoretic quantity \nu_{u,I|S} (defined in the supplementary material) as follows
\hat{\nu}_{u,I|S} := \hat{E}_{X_S} \, E_{R,G}\left[ \left| \hat{\Pr}(X_u = R, X_I = G \mid X_S) - \hat{\Pr}(X_u = R \mid X_S)\,\hat{\Pr}(X_I = G \mid X_S) \right| \right]
where R is a state drawn uniformly at random from [k_u], and G is an |I|-tuple of states drawn independently uniformly at random from [k_{i_1}] \times [k_{i_2}] \times \dots \times [k_{i_{|I|}}] where I = (i_1, i_2, \dots, i_{|I|}). Also we define τ (which will be used as a thresholding constant) as
\tau := C'(\alpha, K, \Lambda)/2   (5)
and L, which is an upper bound on the size of the superset of a neighborhood of u that the algorithm will construct,
L := (8/\tau^2) \log K = (32/C'(\alpha, K, \Lambda)^2) \log K.   (6)
Then the algorithm MrfNbhd at node u is (a Python sketch follows the list):
1. Fix input vertex u. Set S := ∅.
2. While |S| ≤ L and there exists a set of vertices I ⊂ [n] \ S of size at most r − 1 such that \hat{\nu}_{u,I|S} > τ, set S := S ∪ I.
3. For each i ∈ S, if \hat{\nu}_{u,i|S\setminus i} < τ then remove i from S.
4. Return set S as our estimate of the neighborhood of u.
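A sketch of the greedy loop (ours, not the authors' code); nu_hat stands in for the empirical estimator \hat{\nu} defined above and is assumed to be computed from the m samples:

    from itertools import combinations

    def mrf_nbhd(u, n, r, L, tau, nu_hat):
        # Greedy neighborhood construction. nu_hat(u, I, S) is assumed to return
        # the empirical estimate of nu_{u,I|S}.
        S = set()
        grew = True
        while len(S) <= L and grew:                 # step 2: grow S
            grew = False
            candidates = [i for i in range(n) if i != u and i not in S]
            for size in range(1, r):
                for I in combinations(candidates, size):
                    if nu_hat(u, I, tuple(S)) > tau:
                        S |= set(I)
                        grew = True
                        break
                if grew:
                    break
        for i in list(S):                           # step 3: prune spurious vertices
            if nu_hat(u, (i,), tuple(S - {i})) < tau:
                S.discard(i)
        return S                                    # step 4: estimated neighborhood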
Theorem 5.1. Fix δ > 0. Suppose we are given m samples from an α, β-non-degenerate Markov random field with r-order interactions where the underlying graph has maximum degree at most D and each node takes on at most K states. Suppose that
m \ge \frac{60 K^{2L}}{\tau^2 \gamma^{2L}} \left( \log(1/\delta) + \log(L + r) + (L + r) \log(nK) + \log 2 \right).
Then with probability at least 1 − δ, MrfNbhd when run starting from each node u recovers the correct neighborhood of u, and thus recovers the underlying graph G. Furthermore, each run of the algorithm takes O(mLn^r) time.
In many situations, it is too expensive to obtain full samples from a Markov random field (e.g. this
could involve needing to measure every potential symptom of a patient). Here we consider a model
where we are allowed only partial observations in the form of a C-bounded query:
Definition 5.2. A C-bounded query to a Markov random field is specified by a set S with |S| ≤ C and we observe XS.
Our algorithm MrfNbhd can be made to work with C-bounded queries instead of full observations. We prove:
Theorem 5.3. Fix an α, β-non-degenerate Markov random field with r-order interactions where the underlying graph has maximum degree at most D and each node takes on at most K states. The bounded queries modification to the algorithm returns the correct neighborhood of every vertex u using m′Lrn^r bounded queries of size at most L + r where
m' = \frac{60 K^{2L}}{\tau^2 \gamma^{2L}} \left( \log(L r n^r / \delta) + \log(L + r) + (L + r) \log(nK) + \log 2 \right),
with probability at least 1 − δ.
In the supplementary material, we extend our results to the setting where we observe partial samples
where the state of each node is revealed independently with probability p, and the choice of which
nodes to reveal is independent of the sample.
Acknowledgements: We thank Guy Bresler for valuable discussions and feedback.
References
[1] Pieter Abbeel, Daphne Koller, and Andrew Y Ng. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7(Aug):1743–1788, 2006.
[2] Anima Anandkumar, Daniel J Hsu, Furong Huang, and Sham M Kakade. Learning mixtures of tree graphical models. In Advances in Neural Information Processing Systems, pages 1052–1060, 2012.
[3] Animashree Anandkumar, Vincent YF Tan, Furong Huang, and Alan S Willsky. High-dimensional structure estimation in Ising models: Local separation criterion. The Annals of Statistics, pages 1346–1375, 2012.
[4] Guy Bresler. Efficiently learning Ising models on arbitrary graphs. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 771–782. ACM, 2015.
[5] Guy Bresler, Elchanan Mossel, and Allan Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques, pages 343–356. Springer, 2008.
[6] Stephen G Brush. History of the Lenz-Ising model. Reviews of Modern Physics, 39(4):883, 1967.
[7] C Chow and Cong Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[8] Imre Csiszár and Zsolt Talata. Consistent estimation of the basic neighborhood of Markov random fields. In Information Theory, 2004. ISIT 2004. Proceedings. International Symposium on, page 170. IEEE, 2004.
[9] Gautam Dasarathy, Aarti Singh, Maria-Florina Balcan, and Jong Hyuk Park. Active learning algorithms for graphical model selection. J. Mach. Learn. Res., pages 199–207, 2016.
[10] Sanjoy Dasgupta. Learning polytrees. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 134–141. Morgan Kaufmann Publishers Inc., 1999.
[11] Ali Jalali, Pradeep Ravikumar, Vishvas Vasuki, and Sujay Sanghavi. On learning discrete graphical models using group-sparse regularization. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 378–387, 2011.
[12] Jon Kleinberg and Eva Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. Journal of the ACM (JACM), 49(5):616–639, 2002.
[13] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient structure learning of Markov networks using ℓ1-regularization. In Proceedings of the 19th International Conference on Neural Information Processing Systems, pages 817–824. MIT Press, 2006.
[14] Fabio Martinelli and Enzo Olivieri. Approach to equilibrium of Glauber dynamics in the one phase region. Communications in Mathematical Physics, 161(3):447–486, 1994.
[15] Elchanan Mossel, Dror Weitz, and Nicholas Wormald. On the hardness of sampling independent sets beyond the tree threshold. Probability Theory and Related Fields, 143(3):401–439, 2009.
[16] Ryan O'Donnell. Analysis of Boolean Functions. Cambridge University Press, New York, NY, USA, 2014.
[17] Pradeep Ravikumar, Martin J Wainwright, John D Lafferty, et al. High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[18] Narayana P Santhanam and Martin J Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
[19] Allan Sly. Computational transition at the uniqueness threshold. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 287–296. IEEE, 2010.
[20] Allan Sly and Nike Sun. The computational hardness of counting in two-spin models on d-regular graphs. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 361–369. IEEE, 2012.
[21] Nathan Srebro. Maximum likelihood bounded tree-width Markov networks. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 504–511. Morgan Kaufmann Publishers Inc., 2001.
[22] Gregory Valiant. Finding correlations in subquadratic time, with applications to learning parities and juntas. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 11–20. IEEE, 2012.
[23] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov. Interaction screening: Efficient and sample-optimal learning of Ising models. In Advances in Neural Information Processing Systems, pages 2595–2603, 2016.
6,458 | 6,841 | Fitting Low-Rank Tensors in Constant Time
Kohei Hayashi?
National Institute of Advanced Industrial Science and Technology
RIKEN AIP
[email protected]
Yuichi Yoshida?
National Institute of Informatics
[email protected]
Abstract
In this paper, we develop an algorithm that approximates the residual error of
Tucker decomposition, one of the most popular tensor decomposition methods,
with a provable guarantee. Given an order-K tensor X ? RN1 ?????NK , our
algorithm randomly samples a constant number s of indices for each mode and
? ? Rs?????s , whose elements are given by the intersection
creates a ?mini? tensor X
of the sampled indices on X. Then, we show that the residual error of the Tucker
? is sufficiently close to that of X with high probability. This
decomposition of X
result implies that we can figure out how much we can fit a low-rank tensor to X in
constant time, regardless of the size of X. This is useful for guessing the favorable
rank of Tucker decomposition. Finally, we demonstrate how the sampling method
works quickly and accurately using multiple real datasets.
1
Introduction
Tensor decomposition is a fundamental tool for dealing with array-structured data. Using tensor
decomposition, a tensor (or a multidimensional array) is approximated with multiple tensors in
lower-dimensional space using a multilinear operation. This drastically reduces disk and memory
usage. We say that a tensor is of order K if it is a K-dimensional array; each dimension is called a
mode in tensor terminology.
Among the many existing tensor decomposition methods, Tucker decomposition [18] is a popular
choice. To some extent, Tucker decomposition is analogous to singular-value decomposition (SVD):
as SVD decomposes a matrix into left and right singular vectors that interact via singular values,
Tucker decomposition of an order-K tensor consists of K factor matrices that interact via the socalled core tensor. The key difference between SVD and Tucker decomposition is that, with the latter,
the core tensor need not be diagonal and its ?rank? can differ for each mode k = 1, . . . , K. In this
paper, we refer to the size of the core tensor, which is a K-tuple, as the Tucker rank of a Tucker
decomposition.
We are usually interested in obtaining factor matrices and a core tensor to minimize the residual error, that is,
the error between the input and the low-rank approximated tensor. Sometimes, however, knowing the
residual error itself is an important task. The residual error tells us how the low-rank approximation
is suitable to the input tensor, and is particularly useful to predetermine the Tucker rank. In real
∗ Supported by ONR N62909-17-1-2138.
† Supported by JSPS KAKENHI Grant Number JP17H04676 and JST ERATO Grant Number JPMJER1305, Japan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Algorithm 1
Input: Random access to a tensor $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$, Tucker rank $(R_1, \ldots, R_K)$, and $\epsilon, \delta \in (0, 1)$.
for $k = 1$ to $K$ do
  $S_k \leftarrow$ a sequence of $s = s(\epsilon, \delta)$ indices uniformly and independently sampled from $[N_k]$.
Construct a mini-tensor $X|_{S_1,\ldots,S_K}$.
Return $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$.
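For concreteness, the following is a minimal Python sketch of Algorithm 1. It assumes the `tensorly` library as the Tucker solver (the paper itself uses a HOOI implementation from scikit-tensor); the function and variable names are ours, not the authors'.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def approx_tucker_residual(X, ranks, s, rng=None):
    """Estimate the normalized Tucker residual of X from an s x ... x s mini-tensor."""
    rng = np.random.default_rng(rng)
    # Sample s indices per mode, uniformly and independently (with replacement).
    modes = [rng.integers(0, n, size=s) for n in X.shape]
    # Restrict X to the sampled index grid to obtain the mini-tensor.
    X_mini = X[np.ix_(*modes)]
    # Solve Tucker decomposition on the mini-tensor (the paper uses HOOI).
    core, factors = tucker(tl.tensor(X_mini), rank=list(ranks))
    recon = tl.tucker_to_tensor((core, factors))
    # Normalized residual error, as in Eq. (1), but on the mini-tensor.
    return float(np.sum((X_mini - recon) ** 2) / X_mini.size)
```

The point of the theory below is that this estimate is close to the true normalized residual with high probability, for a sample size $s$ that does not depend on $N_1, \ldots, N_K$.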
applications, Tucker ranks are not explicitly given, and we must select them by considering the
balance between space usage and approximation accuracy. For example, if the selected Tucker rank
is too small, we risk losing essential information in the input tensor. On the other hand, if the selected
Tucker rank is too large, the computational cost of computing the Tucker decomposition (even if we
allow for approximation methods) increases considerably along with space usage. As with the case of
the matrix rank, one might think that a reasonably good Tucker rank can be found using a grid search.
Unfortunately, grid searches for Tucker ranks are challenging because, for an order-K tensor, the
Tucker rank consists of K free parameters and the search space grows exponentially in K. Hence,
we want to evaluate each grid point as quickly as possible.
Unfortunately, although several practical algorithms have been proposed, such as the higher-order
orthogonal iteration (HOOI) [7], they are not sufficiently scalable. For each mode, HOOI iteratively
applies SVD to an unfolded tensor, a matrix reshaped from the input tensor. Given an $N_1 \times \cdots \times N_K$ tensor, the computational cost is hence $O(K \max_k N_k \cdot \prod_k N_k)$, which depends crucially
on the input size N1 , . . . , NK . Although there are several approximation algorithms [8, 21, 17], their
computational costs are still intensive. Consequently, we cannot search for good Tucker ranks. Rather,
we can only check a few candidates.
1.1
Our Contributions
When finding a good Tucker rank with a grid search, we need only the residual error. More specifically,
given an order-$K$ tensor $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ and integers $R_k \le N_k$ $(k = 1, \ldots, K)$, we consider the
following rank-$(R_1, \ldots, R_K)$ Tucker-fitting problem. For an integer $n \in \mathbb{N}$, let $[n]$ denote the set
$\{1, 2, \ldots, n\}$. Then, we want to compute the following normalized residual error:
$$\ell_{R_1,\ldots,R_K}(X) := \min_{G \in \mathbb{R}^{R_1 \times \cdots \times R_K},\, \{U^{(k)} \in \mathbb{R}^{N_k \times R_k}\}_{k \in [K]}} \frac{\left\|X - [G; U^{(1)}, \ldots, U^{(K)}]\right\|_F^2}{\prod_{k \in [K]} N_k}, \quad (1)$$
where $[G; U^{(1)}, \ldots, U^{(K)}] \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ is an order-$K$ tensor, defined as
$$[G; U^{(1)}, \ldots, U^{(K)}]_{i_1 \cdots i_K} = \sum_{r_1 \in [R_1], \ldots, r_K \in [R_K]} G_{r_1 \cdots r_K} \prod_{k \in [K]} U^{(k)}_{i_k r_k}$$
for every $i_1 \in [N_1], \ldots, i_K \in [N_K]$. Here, $G$ is the core tensor, and $U^{(1)}, \ldots, U^{(K)}$ are the factor
matrices. Note that we are not concerned with computing the minimizer. Rather, we only want
to compute the minimum value. In addition, we do not need the exact minimum. Indeed, a rough
estimate still helps to narrow down promising rank candidates. The question here is how quickly we
can compute the normalized residual error $\ell_{R_1,\ldots,R_K}(X)$ with moderate accuracy.
We shed light on this question by considering a simple sampling-based algorithm. Given an order-K
tensor $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$, Tucker rank $(R_1, \ldots, R_K)$, and sample size $s \in \mathbb{N}$, we sample a sequence
of indices $S_k = (x^k_1, \ldots, x^k_s)$ uniformly and independently from $\{1, \ldots, N_k\}$ for each mode $k \in [K]$. Then, we construct a mini-tensor $X|_{S_1,\ldots,S_K} \in \mathbb{R}^{s \times \cdots \times s}$ such that $(X|_{S_1,\ldots,S_K})_{i_1,\ldots,i_K} = X_{x^1_{i_1},\ldots,x^K_{i_K}}$. Finally, we compute $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$ using a solver, such as HOOI, that then
outputs the obtained value. The details are provided in Algorithm 1.
In this paper, we show that Algorithm 1 achieves our ultimate goal: with a provable guarantee, the
time complexity remains constant. Assume each rank parameter $R_k$ is sufficiently smaller than
the dimension of each mode $N_k$. Then, given error and confidence parameters $\epsilon, \delta \in (0, 1)$, there
exists a constant $s = s(\epsilon, \delta)$ such that the approximated residual $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$ is close to
the original one $\ell_{R_1,\ldots,R_K}(X)$, to within $\epsilon$ with a probability of at least $1 - \delta$. Note that the time
complexity for computing $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$ does not depend on the input size $N_1, \ldots, N_K$ but
rather on the sample size $s$, meaning that the algorithm runs in constant time, regardless of the input
size.
The main component in our proof is the weak version of Szemerédi's regularity lemma [9], which
roughly states that any tensor can be well approximated by a tensor consisting of a constant number
of blocks whose entries in the same block are equal. Then, we can show that $X|_{S_1,\ldots,S_K}$ is a good
sketch of the original tensor, because by sampling $s$ many indices for each mode, we can hit each
block a sufficient number of times. It follows that $\ell_{R_1,\ldots,R_K}(X)$ and $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$ are close.
To formalize this argument, we want to measure the "distance" between $X$ and $X|_{S_1,\ldots,S_K}$, and we
want to show that it is small. To this end, we exploit graph limit theory, first described by Lovász and
Szegedy [13] (see also [12]), in which we measure the distance between two graphs on a different
number of vertices by considering continuous versions called dikernels. Hayashi and Yoshida [10]
used graph limit theory to develop a constant-time algorithm that minimizes quadratic functions
described by matrices and vectors. We further extend this theory to tensors to analyze the Tucker
fitting problem.
With both synthetic and real datasets, we numerically evaluate our algorithm. The results show that
our algorithm overwhelmingly outperforms other approximation methods in terms of both speed and
accuracy.
2
Preliminaries
Tensors. Let $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ be a tensor. Then, we define the Frobenius norm of $X$ as $\|X\|_F = \sqrt{\sum_{i_1,\ldots,i_K} X_{i_1 \cdots i_K}^2}$, the max norm of $X$ as $\|X\|_{\max} = \max_{i_1 \in [N_1], \ldots, i_K \in [N_K]} |X_{i_1 \cdots i_K}|$, and the cut norm of $X$ as $\|X\|_\square = \max_{S_1 \subseteq [N_1], \ldots, S_K \subseteq [N_K]} \left|\sum_{i_1 \in S_1, \ldots, i_K \in S_K} X_{i_1 \cdots i_K}\right|$. We note that these norms satisfy the triangle inequalities.
For a vector $v \in \mathbb{R}^n$ and a sequence $S = (x_1, \ldots, x_s)$ of indices in $[n]$, we define the restriction
$v|_S \in \mathbb{R}^s$ of $v$ such that $(v|_S)_i = v_{x_i}$ for $i \in [s]$. Let $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ be a tensor, and
$S_k = (x^k_1, \ldots, x^k_s)$ be a sequence of indices in $[N_k]$ for each mode $k \in [K]$. Then, we define the
restriction $X|_{S_1,\ldots,S_K} \in \mathbb{R}^{s \times \cdots \times s}$ of $X$ to $S_1 \times \cdots \times S_K$ such that $(X|_{S_1,\ldots,S_K})_{i_1 \cdots i_K} = X_{x^1_{i_1},\ldots,x^K_{i_K}}$
for each $i_1, \ldots, i_K \in [s]$.
Hyper-dikernels. We call a (measurable) function $\mathcal{W} : [0,1]^K \to \mathbb{R}$ a (hyper-)dikernel of order $K$.
We can regard a dikernel as a tensor whose indices are specified by real values in $[0,1]$. We stress
that the term "dikernel" has nothing to do with kernel methods used in machine learning.
For two functions $f, g : [0,1] \to \mathbb{R}$, we define their inner product as $\langle f, g \rangle = \int_0^1 f(x) g(x) \, dx$. For a
sequence of functions $f^{(1)}, \ldots, f^{(K)}$, we define their tensor product $\bigotimes_{k \in [K]} f^{(k)} : [0,1]^K \to \mathbb{R}$ as
$\left(\bigotimes_{k \in [K]} f^{(k)}\right)(x_1, \ldots, x_K) = \prod_{k \in [K]} f^{(k)}(x_k)$, which is an order-$K$ dikernel.
Let $\mathcal{W} : [0,1]^K \to \mathbb{R}$ be a dikernel. Then, we define the Frobenius norm of $\mathcal{W}$ as $\|\mathcal{W}\|_F = \sqrt{\int_{[0,1]^K} \mathcal{W}(\boldsymbol{x})^2 \, d\boldsymbol{x}}$, the max norm of $\mathcal{W}$ as $\|\mathcal{W}\|_{\max} = \max_{\boldsymbol{x} \in [0,1]^K} |\mathcal{W}(\boldsymbol{x})|$, and the cut norm of
$\mathcal{W}$ as $\|\mathcal{W}\|_\square = \sup_{S_1, \ldots, S_K \subseteq [0,1]} \left|\int_{S_1 \times \cdots \times S_K} \mathcal{W}(\boldsymbol{x}) \, d\boldsymbol{x}\right|$. Again, we note that these norms satisfy
the triangle inequalities. For two dikernels $\mathcal{W}$ and $\mathcal{W}'$, we define their inner product as $\langle \mathcal{W}, \mathcal{W}' \rangle = \int_{[0,1]^K} \mathcal{W}(\boldsymbol{x}) \mathcal{W}'(\boldsymbol{x}) \, d\boldsymbol{x}$.
Let $\lambda$ be the Lebesgue measure. A map $\pi : [0,1] \to [0,1]$ is said to be measure-preserving if the
pre-image $\pi^{-1}(X)$ is measurable for every measurable set $X$, and $\lambda(\pi^{-1}(X)) = \lambda(X)$. A measure-preserving bijection is a measure-preserving map whose inverse map exists and is also measurable
(and, in turn, also measure-preserving). For a measure-preserving bijection $\pi : [0,1] \to [0,1]$ and
a dikernel $\mathcal{W} : [0,1]^K \to \mathbb{R}$, we define a dikernel $\pi(\mathcal{W}) : [0,1]^K \to \mathbb{R}$ as $\pi(\mathcal{W})(x_1, \ldots, x_K) = \mathcal{W}(\pi(x_1), \ldots, \pi(x_K))$.
For a tensor $G \in \mathbb{R}^{R_1 \times \cdots \times R_K}$ and vector-valued functions $\{F^{(k)} : [0,1] \to \mathbb{R}^{R_k}\}_{k \in [K]}$, we define
an order-$K$ dikernel $[G; F^{(1)}, \ldots, F^{(K)}] : [0,1]^K \to \mathbb{R}$ as
$$[G; F^{(1)}, \ldots, F^{(K)}](x_1, \ldots, x_K) = \sum_{r_1 \in [R_1], \ldots, r_K \in [R_K]} G_{r_1 \cdots r_K} \prod_{k \in [K]} F^{(k)}(x_k)_{r_k}.$$
We note that $[G; F^{(1)}, \ldots, F^{(K)}]$ is a continuous analogue of Tucker decomposition.
Tensors and hyper-dikernels. We can construct the dikernel $\mathcal{X} : [0,1]^K \to \mathbb{R}$ from a tensor $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ as follows. For an integer $n \in \mathbb{N}$, let $I_1^n = [0, \frac{1}{n}]$, $I_2^n = (\frac{1}{n}, \frac{2}{n}]$, $\ldots$, $I_n^n = (\frac{n-1}{n}, 1]$.
For $x \in [0,1]$, we define $i_n(x) \in [n]$ as the unique integer such that $x \in I_{i_n(x)}^n$. Then, we define
$\mathcal{X}(x_1, \ldots, x_K) = X_{i_{N_1}(x_1) \cdots i_{N_K}(x_K)}$. The main motivation for creating a dikernel from a tensor is
that, in doing so, we can define the distance between two tensors $X$ and $Y$ of different sizes via the
cut norm, that is, $\|\mathcal{X} - \mathcal{Y}\|_\square$.
Let $\mathcal{W} : [0,1]^K \to \mathbb{R}$ be a dikernel and $S_k = (x^k_1, \ldots, x^k_s)$ for $k \in [K]$ be sequences of elements
in $[0,1]$. Then, we define a dikernel $\mathcal{W}|_{S_1,\ldots,S_K} : [0,1]^K \to \mathbb{R}$ as follows: We first extract a tensor
$W \in \mathbb{R}^{s \times \cdots \times s}$ by setting $W_{i_1 \cdots i_K} = \mathcal{W}(x^1_{i_1}, \ldots, x^K_{i_K})$. Then, we define $\mathcal{W}|_{S_1,\ldots,S_K}$ as the dikernel
constructed from $W$.
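As a small illustration of this construction (names are ours), evaluating the dikernel at a point reduces to looking up the tensor entry whose grid cell contains the point:

```python
import math
import numpy as np

def dikernel_from_tensor(X):
    """Return a function W(x_1, ..., x_K) evaluating the step-function
    dikernel induced by the tensor X, following the I^n interval partition."""
    shape = X.shape
    def index(x, n):
        # i_n(x) is the unique i with x in ((i-1)/n, i/n]; x = 0 falls in I_1.
        return max(1, math.ceil(x * n))
    def W(*xs):
        idx = tuple(index(x, n) - 1 for x, n in zip(xs, shape))
        return X[idx]
    return W

W = dikernel_from_tensor(np.arange(24).reshape(2, 3, 4))
print(W(0.3, 0.5, 0.9))  # looks up the entry X[0, 1, 3]
```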
3
Correctness of Algorithm 1
In this section, we prove the correctness of Algorithm 1.
The following sampling lemma states that dikernels and their sampling versions are close in the cut
norm with high probability.
Lemma 3.1. Let $\mathcal{W}^1, \ldots, \mathcal{W}^T : [0,1]^K \to [-L, L]$ be dikernels. Let $S_1, \ldots, S_K$ be sequences of
$s$ elements uniformly and independently sampled from $[0,1]$. Then, with a probability of at least
$1 - \exp(-\Omega_K(s^2 (T / \log_2 s)^{1/(K-1)}))$, there exists a measure-preserving bijection $\pi : [0,1] \to [0,1]$
such that, for every $t \in [T]$, we have
$$\left\|\mathcal{W}^t - \pi(\mathcal{W}^t|_{S_1,\ldots,S_K})\right\|_\square = L \cdot O_K\left((T / \log_2 s)^{1/(2K-2)}\right),$$
where $O_K(\cdot)$ and $\Omega_K(\cdot)$ hide factors depending on $K$.
We now consider the dikernel counterpart to the Tucker fitting problem, in which we want to compute
the following:
$$\ell_{R_1,\ldots,R_K}(\mathcal{X}) := \inf_{G \in \mathbb{R}^{R_1 \times \cdots \times R_K},\, \{f^{(k)} : [0,1] \to \mathbb{R}^{R_k}\}_{k \in [K]}} \left\|\mathcal{X} - [G; f^{(1)}, \ldots, f^{(K)}]\right\|_F^2. \quad (2)$$
The following lemma states that the Tucker fitting problem and its dikernel counterpart have the same
optimum values.
Lemma 3.2. Let $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ be a tensor, and let $R_1, \ldots, R_K \in \mathbb{N}$ be integers. Then, we have
$\ell_{R_1,\ldots,R_K}(X) = \ell_{R_1,\ldots,R_K}(\mathcal{X})$.
For a set of vector-valued functions $F = \{f^{(k)} : [0,1] \to \mathbb{R}^{R_k}\}_{k \in [K]}$, we define $\|F\|_{\max} = \max_{k \in [K],\, r \in [R_k],\, x \in [0,1]} f^{(k)}_r(x)$. For real values $a, b, c \in \mathbb{R}$, $a = b \pm c$ is shorthand for $b - c \le a \le b + c$. For a dikernel $\mathcal{X} : [0,1]^K \to \mathbb{R}$, we define a dikernel $\mathcal{X}^2 : [0,1]^K \to \mathbb{R}$ as $\mathcal{X}^2(\boldsymbol{x}) = \mathcal{X}(\boldsymbol{x})^2$
for every $\boldsymbol{x} \in [0,1]^K$. The following lemma states that if $\mathcal{X}$ and $\mathcal{Y}$ are close in the cut norm, then the
optimum values of the Tucker fitting problem regarding them are also close.
Lemma 3.3. Let $\mathcal{X}, \mathcal{Y} : [0,1]^K \to \mathbb{R}$ be dikernels with $\|\mathcal{X} - \mathcal{Y}\|_\square \le \epsilon$ and $\|\mathcal{X}^2 - \mathcal{Y}^2\|_\square \le \epsilon$. For
integers $R_1, \ldots, R_K \in \mathbb{N}$, we have
$$\ell_{R_1,\ldots,R_K}(\mathcal{X}) = \ell_{R_1,\ldots,R_K}(\mathcal{Y}) \pm 2\epsilon\left(1 + R\left(\|G_\mathcal{X}\|_{\max} \|F_\mathcal{X}\|_{\max}^K + \|G_\mathcal{Y}\|_{\max} \|F_\mathcal{Y}\|_{\max}^K\right)\right),$$
where $(G_\mathcal{X}, F_\mathcal{X} = \{f_\mathcal{X}^{(k)}\}_{k \in [K]})$ and $(G_\mathcal{Y}, F_\mathcal{Y} = \{f_\mathcal{Y}^{(k)}\}_{k \in [K]})$ are solutions to the problem (2) on
$\mathcal{X}$ and $\mathcal{Y}$, respectively, whose objective values exceed the infima by at most $\epsilon$, and $R = \prod_{k \in [K]} R_k$.
It is well known that the Tucker fitting problem has a minimizer for which the factor matrices are
orthonormal. Thus, we have the following guarantee for the approximation error of Algorithm 1.
Theorem 3.4. Let $X \in \mathbb{R}^{N_1 \times \cdots \times N_K}$ be a tensor, $R_1, \ldots, R_K$ be integers, and $\epsilon, \delta \in (0, 1)$. For
$s(\epsilon, \delta) = 2^{\Theta(1/\epsilon^{2K-2})} + \Theta\left(\log\frac{1}{\delta} \log\log\frac{1}{\delta}\right)$, we have the following. Let $S_1, \ldots, S_K$ be sequences of
indices as defined in Algorithm 1. Let $(G^*, U_1^*, \ldots, U_K^*)$ and $(\widetilde{G}^*, \widetilde{U}_1^*, \ldots, \widetilde{U}_K^*)$ be minimizers of
the problem (1) on $X$ and $X|_{S_1,\ldots,S_K}$, respectively, for which the factor matrices are orthonormal.
Then, with a probability of at least $1 - \delta$, we have
$$\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K}) = \ell_{R_1,\ldots,R_K}(X) \pm O(\epsilon L^2 (1 + 2MR)),$$
where $L = \|X\|_{\max}$, $M = \max\{\|G^*\|_{\max}, \|\widetilde{G}^*\|_{\max}\}$, and $R = \prod_{k \in [K]} R_k$.
We remark that, for the matrix case (i.e., $K = 2$), $\|G^*\|_{\max}$ and $\|\widetilde{G}^*\|_{\max}$ are equal to the maximum
singular values of the original and sampled matrices, respectively.
Proof. We apply Lemma 3.1 to $\mathcal{X}$ and $\mathcal{X}^2$. Then, with a probability of at least $1 - \delta$, there exists a
measure-preserving bijection $\pi : [0,1] \to [0,1]$ such that
$$\|\mathcal{X} - \pi(\mathcal{X}|_{S_1,\ldots,S_K})\|_\square \le \epsilon L \quad \text{and} \quad \|\mathcal{X}^2 - \pi(\mathcal{X}^2|_{S_1,\ldots,S_K})\|_\square \le \epsilon L^2.$$
In what follows, we assume that this has happened. Then, by Lemma 3.3 and the fact that
$\ell_{R_1,\ldots,R_K}(\mathcal{X}|_{S_1,\ldots,S_K}) = \ell_{R_1,\ldots,R_K}(\pi(\mathcal{X}|_{S_1,\ldots,S_K}))$, we have
$$\ell_{R_1,\ldots,R_K}(\mathcal{X}|_{S_1,\ldots,S_K}) = \ell_{R_1,\ldots,R_K}(\mathcal{X}) \pm \epsilon L^2\left(1 + 2R\left(\|G\|_{\max}\|F\|_{\max}^K + \|\widetilde{G}\|_{\max}\|\widetilde{F}\|_{\max}^K\right)\right),$$
where $(G, F = \{f^{(k)}\}_{k \in [K]})$ and $(\widetilde{G}, \widetilde{F} = \{\widetilde{f}^{(k)}\}_{k \in [K]})$ are as in the statement of Lemma 3.3.
From the proof of Lemma 3.2, we can assume that $\|G\|_{\max} = \|G^*\|_{\max}$, $\|\widetilde{G}\|_{\max} = \|\widetilde{G}^*\|_{\max}$,
$\|F\|_{\max} \le 1$, and $\|\widetilde{F}\|_{\max} \le 1$ (owing to the orthonormality of $U_1^*, \ldots, U_K^*$ and $\widetilde{U}_1^*, \ldots, \widetilde{U}_K^*$). It
follows that
$$\ell_{R_1,\ldots,R_K}(\mathcal{X}|_{S_1,\ldots,S_K}) = \ell_{R_1,\ldots,R_K}(\mathcal{X}) \pm \epsilon L^2\left(1 + 2R\left(\|G^*\|_{\max} + \|\widetilde{G}^*\|_{\max}\right)\right). \quad (3)$$
Then, we have
$$\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K}) = \ell_{R_1,\ldots,R_K}(\mathcal{X}|_{S_1,\ldots,S_K}) \quad \text{(by Lemma 3.2)}$$
$$= \ell_{R_1,\ldots,R_K}(\mathcal{X}) \pm \epsilon L^2\left(1 + 2R\left(\|G^*\|_{\max} + \|\widetilde{G}^*\|_{\max}\right)\right) \quad \text{(by (3))}$$
$$= \ell_{R_1,\ldots,R_K}(X) \pm \epsilon L^2\left(1 + 2R\left(\|G^*\|_{\max} + \|\widetilde{G}^*\|_{\max}\right)\right). \quad \text{(by Lemma 3.2)}$$
Hence, we obtain the desired result.
4
Related Work
To solve Tucker decomposition, several randomized algorithms have been proposed. A popular
approach involves using a truncated or randomized SVD. For example, Zhou et al. [21] proposed
a variant of HOOI with randomized SVD. Another approach is based on tensor sparsification.
Tsourakakis [17] proposed MACH, which randomly picks each element of the input tensor and
substitutes zero with probability $1 - p$, where $p \in (0, 1]$ is an approximation parameter.
Moreover, several authors proposed CUR-type Tucker decomposition, which approximates the input
tensor by sampling tensor tubes [6, 8].
Unfortunately, these methods do not significantly reduce the computational cost. Randomized
SVD approaches reduce the computational cost of multiple SVDs from $O(K \max_k N_k \cdot \prod_k N_k)$ to
$O(K \max_k R_k \cdot \prod_k N_k)$, but they still depend on $\prod_k N_k$. CUR-type approaches require the same
time complexity. In MACH, to obtain accurate results, we need to set $p$ to a constant, for instance
$p = 0.1$ [17]. Although this will improve the runtime by a constant factor, the dependency on $\prod_k N_k$
does not change.
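A sketch of MACH-style sparsification follows. The $1/p$ rescaling of kept entries is standard in Achlioptas–McSherry-type sparsification, on which MACH builds, but it is not spelled out in the text above, so treat it as our assumption; the function name is ours.

```python
import numpy as np

def mach_sparsify(X, p, rng=None):
    """Keep each entry of X independently with probability p, zero it otherwise.
    Kept entries are rescaled by 1/p so the sparse tensor is an unbiased
    estimate of X; a Tucker solver is then run on the result."""
    rng = np.random.default_rng(rng)
    mask = rng.random(X.shape) < p
    return np.where(mask, X / p, 0.0)
```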
[Figure 1 appears here: four panels for N = 100, 200, 400, 800, plotting the residual error (vertical axis, roughly 0.0 to 0.3) against R = 11, ..., 20 (horizontal axis) for sample sizes s = 20, 40, 80.]
Figure 1: Synthetic data: computed residual errors for various Tucker ranks. The vertical axis
indicates the approximated residual error $\ell_{R_1,\ldots,R_K}(X|_{S_1,\ldots,S_K})$; the horizontal axis indicates $R$. The error bar indicates the standard
deviation over ten trials with different random seeds, which affected both data generation and
sampling.
5
Experiments
For the experimental evaluation, we slightly modified our sampling algorithm. In Algorithm 1, the
indices are sampled using sampling with replacement (i.e., the same indices can be sampled more
than once). Although this sampling method is theoretically sound, we risk obtaining redundant
information by sampling the same index several times. To avoid this issue, we used sampling without
replacement, i.e., each index was sampled at most once. Furthermore, if the dimension of a mode
was smaller than the sampling size, we used all the coordinates. That is, we sampled $\min(s, N_k)$
indices for each mode $k \in [K]$. Note that both sampling methods, with and without replacement, are
almost equivalent when the input size $N_1, \ldots, N_K$ is sufficiently larger than $s$ (i.e., the probability
that a previously sampled index is sampled again approaches zero).
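A minimal sketch of this modified sampling step, using NumPy (names are ours):

```python
import numpy as np

def sample_modes_without_replacement(shape, s, rng=None):
    """Sample min(s, N_k) distinct indices per mode, as in the modified algorithm."""
    rng = np.random.default_rng(rng)
    return [rng.choice(n, size=min(s, n), replace=False) for n in shape]
```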
5.1
Synthetic Data
We first demonstrate the accuracy of our method using synthetic data. We prepared $N \times N \times N$
tensors for $N \in \{100, 200, 400, 800\}$, with a Tucker rank of $(15, 15, 15)$. Each element of the core
$G \in \mathbb{R}^{15 \times 15 \times 15}$ and the factor matrices $U^{(1)}, U^{(2)}, U^{(3)} \in \mathbb{R}^{N \times 15}$ was drawn from a standard
normal distribution. We set $Y = [G; U^{(1)}, U^{(2)}, U^{(3)}]$. Then, we generated $X \in \mathbb{R}^{N \times N \times N}$ as
$X_{ijk} = Y_{ijk} / \|Y\|_F + 0.1\epsilon_{ijk}$, where $\epsilon_{ijk}$ follows the standard normal distribution for $i, j, k \in [N]$.
Namely, $X$ had a low-rank structure, though some small noise was added. Subsequently, $X$ was
decomposed using our method with various Tucker ranks $(R, R, R)$ for $R \in \{11, 12, \ldots, 20\}$ and
the sample size $s \in \{20, 40, 80\}$.
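The synthetic data generation is straightforward to reproduce; a sketch under the setup above (names are ours):

```python
import numpy as np

def make_synthetic(N, rank=15, noise=0.1, rng=None):
    """Generate the low-rank-plus-noise tensor X of Section 5.1."""
    rng = np.random.default_rng(rng)
    G = rng.standard_normal((rank, rank, rank))
    U1, U2, U3 = (rng.standard_normal((N, rank)) for _ in range(3))
    # Y = [G; U1, U2, U3]: contract each mode of the core with a factor matrix.
    Y = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3, optimize=True)
    # Normalize and add Gaussian noise: X_ijk = Y_ijk / ||Y||_F + 0.1 * eps_ijk.
    return Y / np.linalg.norm(Y) + noise * rng.standard_normal((N, N, N))
```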
The results (see Figure 1) show that our method behaved ideally. That is, the error was high when
R was less than the true rank, 15, and it was almost zero when R was greater than or equal to the
true rank. Note that the scale of the estimated residual error seems to depend on s, i.e., small s tends
to yield a small residual error. This implies our method underestimates the residual error when s is
small.
5.2
Real Data
To evaluate how our method worked against real data tensors, we used eight datasets [1, 2, 4, 11,
14, 19] described in Table 1, where the "fluor" dataset is order-4 and the others are order-3 tensors.
Details regarding the data are provided in the Supplementary material. Before the experiment, we
normalized each data tensor by its norm $\|X\|_F$. To evaluate the approximation accuracy, we used
HOOI implemented in Python by Maximilian Nickel³ as the "true" residual error.⁴ As baselines, we
used the two randomized methods introduced in Section 4: randomized SVD [21] and MACH [17].
We denote our method by "sample s", where s indicates the sample size (e.g., sample40 denotes our
method with s = 40).
³ https://github.com/mnick/scikit-tensor
⁴ Note that, though no approximation is used in HOOI, the objective function (1) is nonconvex and it is not
guaranteed to converge to the global minimum. The obtained solution can be different from the ground truth.
Table 1: Real Datasets.

| Dataset | Size | Total # of elements |
|---|---|---|
| movie_gray | 120 × 160 × 107 | 2.0M |
| EEM | 28 × 13324 × 8 | 2.9M |
| fluorescence | 299 × 301 × 41 | 3.6M |
| bonnie | 89 × 97 × 549 | 4.7M |
| fluor | 405 × 136 × 19 × 5 | 5.2M |
| wine | 44 × 2700 × 200 | 23.7M |
| BCI_Berlin | 4001 × 59 × 1400 | 0.3G |
| visor | 16818 × 288 × 384 | 1.8G |

[Figure 2 appears here: one panel per order-3 dataset (movie_gray, fluorescence, bonnie, wine, BCI_Berlin, visor), plotting the residual error against Tucker ranks on the grid $R_k \in \{5, 10, 15, 20\}$ (from 5x5x5 up to 20x20x20) for the methods hooi, randsvd, sample40, and sample80.]
Figure 2: Real data: (approximated) residual errors for various Tucker ranks.
Similarly, "mach p" refers to MACH with sparsification probability set at p. For
all the approximation methods, we used the HOOI implementation to solve Tucker decomposition.
Every data tensor was decomposed with Tucker rank $(R_1, \ldots, R_K)$ on the grid $R_k \in \{5, 10, 15, 20\}$
for $k \in [K]$.
Figure 2 shows the residual error for order-3 data.⁵ It shows that the random projection tends to
overestimate the decomposition error. On the other hand, except for the wine dataset, our method
stably estimated the residual error with reasonable approximation errors. For the wine dataset, our
method estimated a very small value, far from the correct value. This result makes sense, however,
because the wine dataset is sparse (where 90% of the elements are zero) and the residual error is too
small. Table 2 shows the absolute error from HOOI averaged over all rank settings. In all the datasets,
our methods achieved the lowest error.
⁵ Here we exclude the results of the EEM dataset because its size is too small and we were unable to run the
experiment with all the Tucker rank settings. Also, the results of MACH are excluded owing to considerable
errors.
Table 2: Real data: absolute error of HOOI's and the others' residual errors averaged over ranks. The best and the second best results are shown in bold and italic, respectively.

| Dataset | mach0.1 | mach0.3 | randsvd | sample40 | sample80 |
|---|---|---|---|---|---|
| movie_gray | 0.809 ± 0.001 | 0.491 ± 0.002 | 0.004 ± 0.003 | *0.001 ± 0.001* | **0.000 ± 0.000** |
| EEM | 0.855 ± 0.006 | 0.565 ± 0.017 | 0.018 ± 0.029 | *0.003 ± 0.003* | **0.003 ± 0.003** |
| fluorescence | 0.818 ± 0.010 | 0.501 ± 0.009 | 0.024 ± 0.023 | *0.004 ± 0.005* | **0.002 ± 0.002** |
| bonnie | 0.832 ± 0.008 | 0.528 ± 0.016 | 0.012 ± 0.011 | *0.004 ± 0.002* | **0.003 ± 0.001** |
| fluor | 0.822 ± 0.006 | 0.507 ± 0.010 | 0.009 ± 0.007 | *0.003 ± 0.001* | **0.002 ± 0.001** |
| wine | 0.854 ± 0.009 | 0.633 ± 0.019 | 0.012 ± 0.009 | *0.008 ± 0.006* | **0.007 ± 0.006** |
| BCI_Berlin | 0.677 ± 0.025 | 0.415 ± 0.016 | *0.057 ± 0.020* | 0.065 ± 0.022 | **0.055 ± 0.007** |
| visor | 0.799 ± 0.003 | 0.484 ± 0.002 | 0.007 ± 0.003 | *0.003 ± 0.001* | **0.001 ± 0.001** |
Table 3: Real data: Kendall's tau against the ranking of Tucker ranks obtained by HOOI.

| Dataset | mach0.1 | mach0.3 | randsvd | sample40 | sample80 |
|---|---|---|---|---|---|
| movie_gray | 0.1 | 0 | 0.1 | 0.71 | 0.73 |
| EEM | 0.72 | 0.67 | 0.77 | 0.79 | 0.91 |
| fluorescence | 0.04 | 0.09 | 0.28 | 0.61 | 0.77 |
| bonnie | -0.01 | 0.04 | 0.33 | 0.27 | 0.67 |
| fluor | 0.78 | 0.79 | 0.83 | 0.93 | 0.89 |
| wine | -0.07 | -0.01 | -0.02 | 0.04 | 0.15 |
| BCI_Berlin | 0.08 | 0.17 | 0.02 | 0.18 | 0.45 |
| visor | 0.2 | 0.37 | 0.11 | 0.64 | 0.7 |
Table 4: Real data: runtime averaged over Tucker ranks (in seconds).

| Dataset | hooi | mach0.1 | mach0.3 | randsvd | sample40 | sample80 |
|---|---|---|---|---|---|---|
| movie_gray | 0.71 | 39.36 | 109.93 | 0.33 | 0.13 | 0.25 |
| EEM | 3447.97 | 16494.45 | 8839.09 | 2212.54 | 0.11 | 0.11 |
| fluorescence | 2.67 | 122.8 | 91.37 | 1.47 | 0.13 | 0.23 |
| bonnie | 9.13 | 102.93 | 72.34 | 2.32 | 0.11 | 0.41 |
| fluor | 3.2 | 47.85 | 149.39 | 1.43 | 0.2 | 0.43 |
| wine | 142.34 | 418.5 | 266.85 | 41.94 | 0.12 | 0.23 |
| BCI_Berlin | 428.13 | 3874.48 | 10258.96 | 82.43 | 0.2 | 0.45 |
| visor | 10034.96 | 27841.67 | 27854.14 | 1950.45 | 0.13 | 0.26 |
Next, we evaluated the correctness of the order of Tucker ranks. For rank determination, it is important
that the rankings of Tucker ranks in terms of residual errors are consistent between the original and
the sampled tensors. For example, if the rank-(15, 15, 5) Tucker decomposition of the original tensor
achieves a lower error than the rank-(5, 15, 15) Tucker decomposition, this order relation should
be preserved in the sampled tensor. We evaluated this using Kendall's tau coefficient between the
rankings of Tucker ranks obtained by HOOI and by the other methods. Kendall's tau coefficient takes the
value $+1$ when the two rankings are the same, and $-1$ when they are opposite. Table 3 shows the
results. We can see that, again, our method outperformed the others.
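This consistency check is easy to reproduce with SciPy; a sketch, assuming residual errors are stored per Tucker-rank tuple (names are ours):

```python
from scipy.stats import kendalltau

def rank_consistency(errors_exact, errors_approx):
    """Kendall's tau between two dicts mapping Tucker-rank tuple -> residual error."""
    ranks = sorted(errors_exact)  # common set of Tucker-rank tuples
    exact = [errors_exact[r] for r in ranks]
    approx = [errors_approx[r] for r in ranks]
    tau, _ = kendalltau(exact, approx)
    return tau
```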
Table 4 shows the runtime averaged over all the rank settings. It shows that our method is consistently
the fastest. Note that MACH was slower than normal Tucker decomposition. This is possibly because
it must create an additional sparse tensor, which requires $O(\prod_k N_k)$ time complexity.
6
Discussion
One might point out by way of criticism that the residual error is not a satisfying measure for
determining rank. In machine learning and statistics, it is common to choose hyperparameters based
on the generalization error or its estimator, such as cross-validation (CV) error, rather than the training
error (i.e., the residual error in Tucker decomposition). Unfortunately, our approach cannot be used
with the CV error, because what we can obtain is the minimum of the training error, whereas CV requires
us to plug in the minimizers. An alternative is to use information criteria such as the Akaike [3] and
Bayesian information criteria [15]. These criteria are given by the penalty term, which consists of
the number of parameters and samples,⁶ and the maximum log-likelihood. Because the maximum
log-likelihood is equivalent to the residual error, our method can approximate these criteria.
Python code of our algorithm is available at: https://github.com/hayasick/CTFT.
References
[1] E. Acar, R. Bro, and B. Schmidt. New exploratory clustering tool. Journal of Chemometrics, 22(1):91, 2008.
[2] E. Acar, E. E. Papalexakis, G. Gürdeniz, M. A. Rasmussen, A. J. Lawaetz, M. Nilsson, and R. Bro. Structure-revealing data fusion. BMC Bioinformatics, 15(1):239, 2014.
[3] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.
[4] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, and G. Curio. The non-invasive Berlin brain-computer interface: fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2):539–550, 2007.
[5] C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and testing. Advances in Mathematics, 219(6):1801–1851, 2008.
[6] C. F. Caiafa and A. Cichocki. Generalizing the column-row matrix decomposition to multi-way arrays. Linear Algebra and its Applications, 433(3):557–573, 2010.
[7] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-$(r_1, r_2, \ldots, r_n)$ approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
[8] P. Drineas and M. W. Mahoney. A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra and its Applications, 420(2):553–571, 2007.
[9] A. Frieze and R. Kannan. The regularity lemma and approximation schemes for dense problems. In FOCS, pages 12–20, 1996.
[10] K. Hayashi and Y. Yoshida. Minimizing quadratic functions in constant time. In NIPS, pages 2217–2225, 2016.
[11] A. J. Lawaetz, R. Bro, M. Kamstrup-Nielsen, I. J. Christensen, L. N. Jørgensen, and H. J. Nielsen. Fluorescence spectroscopy as a potential metabonomic tool for early detection of colorectal cancer. Metabolomics, 8(1):111–121, 2012.
[12] L. Lovász. Large Networks and Graph Limits. American Mathematical Society, 2012.
[13] L. Lovász and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96(6):933–957, 2006.
[14] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, volume 3, pages 32–36, 2004.
[15] G. Schwarz et al. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.
[16] R. J. Steele and A. E. Raftery. Performance of Bayesian model selection criteria for Gaussian mixture models. Frontiers of Statistical Decision Making and Bayesian Analysis, 2:113–130, 2010.
[17] C. E. Tsourakakis. MACH: Fast randomized tensor decompositions. In ICDM, pages 689–700, 2010.
[18] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[19] R. Vezzani and R. Cucchiara. Video surveillance online repository (ViSOR): an integrated framework. Multimedia Tools and Applications, 50(2):359–380, 2010.
[20] S. Watanabe. Algebraic Geometry and Statistical Learning Theory, volume 25. Cambridge University Press, 2009.
[21] G. Zhou, A. Cichocki, and S. Xie. Decomposition of big tensors with low multilinear rank. arXiv preprint arXiv:1412.1885, 2014.
⁶ For models with multiple solutions, such as Tucker decomposition, the penalty term can differ from the standard form [20]. Still, these criteria are useful in practice (see, e.g., [16]).
6,459 | 6,842 | Deep Supervised Discrete Hashing
Qi Li
Zhenan Sun
Ran He
Tieniu Tan
Center for Research on Intelligent Perception and Computing
National Laboratory of Pattern Recognition
CAS Center for Excellence in Brain Science and Intelligence Technology
Institute of Automation, Chinese Academy of Sciences
{qli,znsun,rhe,tnt}@nlpr.ia.ac.cn
Abstract
With the rapid growth of image and video data on the web, hashing has been
extensively studied for image or video search in recent years. Benefiting from
recent advances in deep learning, deep hashing methods have achieved promising
results for image retrieval. However, there are some limitations of previous deep
hashing methods (e.g., the semantic information is not fully exploited). In this
paper, we develop a deep supervised discrete hashing algorithm based on the
assumption that the learned binary codes should be ideal for classification. Both the
pairwise label information and the classification information are used to learn the
hash codes within one stream framework. We constrain the outputs of the last layer
to be binary codes directly, which is rarely investigated in deep hashing algorithm.
Because of the discrete nature of hash codes, an alternating minimization method
is used to optimize the objective function. Experimental results have shown that
our method outperforms current state-of-the-art methods on benchmark datasets.
1
Introduction
Hashing has attracted much attention in recent years because of the rapid growth of image and
video data on the web. It is one of the most popular techniques for image or video search due to
its low computational cost and storage efficiency. Generally speaking, hashing is used to encode
high dimensional data into a set of binary codes while preserving the similarity of images or videos.
Existing hashing methods can be roughly grouped into two categories: data independent methods and
data dependent methods.
Data independent methods rely on random projections to construct hash functions. Locality Sensitive
Hashing (LSH) [3] is one of the representative methods, which uses random linear projections to
map nearby data into similar binary codes. LSH is widely used for large scale image retrieval. In
order to generalize LSH to accommodate arbitrary kernel functions, the Kenelized Locality Sensitive
Hashing (KLSH) [7] is proposed to deal with high-dimensional kernelized data. Other variants of
LSH are also proposed in recent years, such as super-bit LSH [5], non-metric LSH [14]. However,
there are some limitations of data independent hashing methods, e.g., it makes no use of training data.
The learning efficiency is low, and it requires longer hash codes to attain high accuracy. Due to the
limitations of the data independent hashing methods, recent hashing methods try to exploit various
machine learning techniques to learn more effective hash function based on a given dataset.
Data dependent methods refer to using training data to learn the hash functions. They can be further
categorized into supervised and unsupervised methods. Unsupervised methods retrieve the neighbors
under some kinds of distance metrics. Iterative Quantization (ITQ) [4] is one of the representative
unsupervised hashing methods, in which the projection matrix is optimized by iterative projection and
thresholding according to the given training samples. In order to utilize the semantic labels of data
samples, supervised hashing methods are proposed. Supervised Hashing with Kernels (KSH) [13]
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
is a well-known method of this kind, which learns the hash codes by minimizing the Hamming
distances between similar pairs, and at the same time maximizing the Hamming distances between
dissimilar pairs. Binary Reconstruction Embedding (BRE) [6] learns the hash functions by explicitly
minimizing the reconstruction error between the original distances and the reconstructed distances
in Hamming space. Order Preserving Hashing (OPH) [17] learns the hash codes by preserving the
supervised ranking list information, which is calculated based on the semantic labels. Supervised
Discrete Hashing (SDH) [15] aims to directly optimize the binary hash codes using the discrete cyclic
coordinate descent method.
Recently, deep learning based hashing methods have been proposed to simultaneously learn the
image representation and hash coding, which have shown superior performance over the traditional
hashing methods. Convolutional Neural Network Hashing (CNNH) [20] is one of the early works to
incorporate deep neural networks into hash coding, which consists of two stages to learn the image
representations and hash codes. One drawback of CNNH is that the learned image representation can
not give feedback for learning better hash codes. To overcome the shortcomings of CNNH, Network
In Network Hashing (NINH) [8] presents a triplet ranking loss to capture the relative similarities of
images. The image representation learning and hash coding can benefit each other within one stage
framework. Deep Semantic Ranking Hashing (DSRH) [26] learns the hash functions by preserving
semantic similarity between multi-label images. Other ranking-based deep hashing methods have
also been proposed in recent years [18, 22]. Besides the triplet ranking based methods, some pairwise
label based deep hashing methods are also exploited [9, 27]. A novel and efficient training algorithm
inspired by alternating direction method of multipliers (ADMM) is proposed to train very deep neural
networks for supervised hashing in [25]. The classification information is used to learn hash codes.
[25] relaxes the binary constraint to be continuous, then thresholds the obtained continuous variables
to be binary codes.
Although deep learning based methods have achieved great progress in image retrieval, there are
some limitations of previous deep hashing methods (e.g., the semantic information is not fully
exploited). Recent works try to divide the whole learning process into two streams under the multitask learning framework [11, 21, 22]. The hash stream is used to learn the hash function, while the
classification stream is utilized to mine the semantic information. Although the two stream framework
can improve the retrieval performance, the classification stream is only employed to learn the image
representations, which does not have a direct impact on the hash function. In this paper, we use CNN
to learn the image representation and hash function simultaneously. The last layer of CNN outputs
the binary codes directly based on the pairwise label information and the classification information.
The contributions of this work are summarized as follows. 1) The last layer of our method is
constrained to output the binary codes directly. The binary codes are learned to preserve the similarity
relationship and keep the label consistent simultaneously. To the best of our knowledge, this is the first
deep hashing method that uses both pairwise label information and classification information to learn
the hash codes under one stream framework. 2) In order to reduce the quantization error, we keep
the discrete nature of the hash codes during the optimization process. An alternating minimization
method is proposed to optimize the objective function by using the discrete cyclic coordinate descent
method. 3) Extensive experiments have shown that our method outperforms current state-of-the-art
methods on benchmark datasets for image retrieval, which demonstrates the effectiveness of the
proposed method.
2
Deep supervised discrete hashing
2.1
Problem definition
Given $N$ image samples $X = \{x_i\}_{i=1}^{N} \in \mathbb{R}^{d \times N}$, hash coding is to learn a collection of $K$-bit
binary codes $B \in \{-1, 1\}^{K \times N}$, where the $i$-th column $b_i \in \{-1, 1\}^{K}$ denotes the binary codes
for the $i$-th sample $x_i$. The binary codes are generated by the hash function $h(\cdot)$, which can be
rewritten as $[h_1(\cdot), \ldots, h_K(\cdot)]$. For image sample $x_i$, its hash codes can be represented as $b_i = h(x_i) = [h_1(x_i), \ldots, h_K(x_i)]$. Generally speaking, hashing is to learn a hash function to project
image samples onto a set of binary codes.
2.2
Similarity measure
In supervised hashing, the label information is given as $Y = \{y_i\}_{i=1}^{N} \in \mathbb{R}^{c \times N}$, where $y_i \in \{0, 1\}^{c}$
corresponds to the sample $x_i$, and $c$ is the number of categories. Note that one sample may belong
to multiple categories. Given the semantic label information, the pairwise label information is
derived as $S = \{s_{ij}\}$, $s_{ij} \in \{0, 1\}$, where $s_{ij} = 1$ when $x_i$ and $x_j$ are semantically similar, and
$s_{ij} = 0$ when $x_i$ and $x_j$ are semantically dissimilar. For two binary codes $b_i$ and $b_j$, the relationship
between their Hamming distance $\mathrm{dist}_H(\cdot, \cdot)$ and their inner product $\langle \cdot, \cdot \rangle$ is formulated as follows:
$\mathrm{dist}_H(b_i, b_j) = \frac{1}{2}(K - \langle b_i, b_j \rangle)$. If the inner product of two binary codes is small, their Hamming
distance will be large, and vice versa. Therefore the inner product of different hash codes can be used
to quantify their similarity.
Given the pairwise similarity relationship $S = \{s_{ij}\}$, the Maximum a Posteriori (MAP) estimation of
the hash codes can be represented as:
$$p(B \mid S) \propto p(S \mid B)\, p(B) = \prod_{s_{ij} \in S} p(s_{ij} \mid B)\, p(B), \quad (1)$$
where $p(S \mid B)$ denotes the likelihood function, and $p(B)$ is the prior distribution. For each pair of
images, $p(s_{ij} \mid B)$ is the conditional probability of $s_{ij}$ given their hash codes $B$, which is defined as
follows:
$$p(s_{ij} \mid B) = \begin{cases} \sigma(\Psi_{ij}), & s_{ij} = 1 \\ 1 - \sigma(\Psi_{ij}), & s_{ij} = 0 \end{cases} \quad (2)$$
where $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function, and $\Psi_{ij} = \frac{1}{2}\langle b_i, b_j \rangle = \frac{1}{2} b_i^T b_j$. From Equation 2
we can see that the larger the inner product $\langle b_i, b_j \rangle$ is, the larger $p(1 \mid b_i, b_j)$ will be, which implies
that $b_i$ and $b_j$ should be classified as similar, and vice versa. Therefore Equation 2 is a reasonable
similarity measure for hash codes.
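The identity between inner products and Hamming distances is easy to verify numerically; a small sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 16
b_i = rng.choice([-1, 1], size=K)
b_j = rng.choice([-1, 1], size=K)

hamming = np.sum(b_i != b_j)        # number of differing bits
from_inner = 0.5 * (K - b_i @ b_j)  # (1/2)(K - <b_i, b_j>)
assert hamming == from_inner
```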
2.3
Loss function
In recent years, deep learning based methods have shown their superior performance over the
traditional handcrafted features on object detection, image classification, image segmentation, etc. In
this section, we take advantage of recent advances in CNN to learn the hash function. In order to have
a fair comparison with other deep hashing methods, we choose the CNN-F network architecture [2]
as a basic component of our algorithm. This architecture is widely used to learn the hash function
in recent works [9, 18]. Specifically, there are two separate CNNs to learn the hash function, which
share the same weights. The pairwise samples are used as the input for these two separate CNNs. The
CNN model consists of 5 convolutional layers and 2 fully connected layers. The number of neurons
in the last fully connected layer is equal to the number of hash codes.
Considering the similarity measure, the following loss function is used to learn the hash codes:
$$J = -\log p(S \mid B) = -\sum_{s_{ij} \in S} \log p(s_{ij} \mid B) = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right). \quad (3)$$
Equation 3 is the negative log likelihood function, which makes the Hamming distance of two similar
points as small as possible, and at the same time makes the Hamming distance of two dissimilar
points as large as possible.
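A direct NumPy transcription of Equation 3 follows (names are ours; in the actual model this loss is minimized over the network outputs rather than evaluated once):

```python
import numpy as np

def pairwise_loss(B, S):
    """Negative log-likelihood of Eq. (3).
    B: K x N matrix of codes (or relaxed outputs); S: dict {(i, j): s_ij}."""
    J = 0.0
    for (i, j), s in S.items():
        psi = 0.5 * B[:, i] @ B[:, j]
        # log(1 + e^psi) computed stably via logaddexp.
        J -= s * psi - np.logaddexp(0.0, psi)
    return J
```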
Although pairwise label information is used to learn the hash function in Equation 3, the label
information is not fully exploited. Most of the previous works make use of the label information
under a two stream multi-task learning framework [21, 22]. The classification stream is used to
measure the classification error, while the hash stream is employed to learn the hash function. One
basic assumption of our algorithm is that the learned binary codes should be ideal for classification.
In order to take advantage of the label information directly, we expect the learned binary codes to be
optimal for the jointly learned linear classifier.
We use a simple linear classifier to model the relationship between the learned binary codes and the
label information:
$$Y = W^T B, \quad (4)$$
where $W = [w_1, w_2, \ldots, w_K]$ is the classifier weight, and $Y = [y_1, y_2, \ldots, y_N]$ is the ground-truth label
vector. The loss function can be calculated as:
$$Q = L(Y, W^T B) + \lambda \|W\|_F^2 = \sum_{i=1}^{N} L(y_i, W^T b_i) + \lambda \|W\|_F^2, \quad (5)$$
where $L(\cdot)$ is the loss function, $\lambda$ is the regularization parameter, and $\|\cdot\|_F$ is the Frobenius norm of a
matrix. Combining Equation 5 and Equation 3, we have the following formulation:
$$F = J + \mu Q = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right) + \mu\sum_{i=1}^{N} L(y_i, W^T b_i) + \nu \|W\|_F^2, \quad (6)$$
where $\mu$ is the trade-off parameter and $\nu = \lambda\mu$. Suppose that we choose the $\ell_2$ loss for the linear
classifier; then Equation 6 is rewritten as follows:
$$F = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right) + \mu\sum_{i=1}^{N}\left\|y_i - W^T b_i\right\|_2^2 + \nu \|W\|_F^2, \quad (7)$$
where $\|\cdot\|_2$ is the $\ell_2$ norm of a vector. The hypothesis for Equation 7 is that the learned binary codes
should make the pairwise label likelihood as large as possible, and should be optimal for the jointly
learned linear classifier.
2.4
Optimization
The minimization of Equation 7 is a discrete optimization problem, which is difficult to optimize
directly. There are several ways to solve this problem. (1) In the training stage, the sigmoid or tanh
activation function is utilized to replace the ReLU function after the last fully connected layer, and
then the continuous outputs are used as a relaxation of the hash codes. In the testing stage, the hash
codes are obtained by applying a thresholding function on the continuous outputs. One limitation of
this method is that the convergence of the algorithm is slow. Besides, there will be a large quantization
error. (2) The sign function is directly applied after the outputs of the last fully connected layer, which
constrains the outputs to be binary variables strictly. However, the sign function is non-differentiable,
which makes it difficult to back-propagate the gradient of the loss function.
Because of the discrepancy between the Euclidean space and the Hamming space, it would result in
suboptimal hash codes if one totally ignores the binary constraints. We emphasize that it is essential
to keep the discrete nature of the binary codes. Note that in our formulation, we constrain the outputs
of the last layer to be binary codes directly, thus Equation 7 is difficult to optimize directly. Similar
to [9, 18, 22], we solve this problem by introducing an auxiliary variable. Then we approximate
Equation 7 as:
$$F = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right) + \mu\sum_{i=1}^{N}\left\|y_i - W^T b_i\right\|_2^2 + \nu \|W\|_F^2,$$
$$\text{s.t. } b_i = \mathrm{sgn}(h_i),\ h_i \in \mathbb{R}^{K \times 1}\ (i = 1, \ldots, N), \quad (8)$$
where $\Psi_{ij} = \frac{1}{2} h_i^T h_j$. Here $h_i$ $(i = 1, \ldots, N)$ can be seen as the output of the last fully connected layer,
which is represented as:
$$h_i = M^T \Phi(x_i; \theta) + n, \quad (9)$$
where $\theta$ denotes the parameters of the previous layers before the last fully connected layer, $M \in \mathbb{R}^{4096 \times K}$ represents the weight matrix, and $n \in \mathbb{R}^{K \times 1}$ is the bias term.
According to the method of Lagrange multipliers, Equation 8 can be reformulated as:
$$F = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right) + \mu\sum_{i=1}^{N}\left\|y_i - W^T b_i\right\|_2^2 + \nu \|W\|_F^2 + \eta\sum_{i=1}^{N}\left\|b_i - \mathrm{sgn}(h_i)\right\|_2^2,$$
$$\text{s.t. } b_i \in \{-1, 1\}^K\ (i = 1, \ldots, N), \quad (10)$$
where $\eta$ is the Lagrange multiplier. Equation 10 can be further relaxed as:
$$F = -\sum_{s_{ij} \in S}\left(s_{ij}\Psi_{ij} - \log\left(1 + e^{\Psi_{ij}}\right)\right) + \mu\sum_{i=1}^{N}\left\|y_i - W^T b_i\right\|_2^2 + \nu \|W\|_F^2 + \eta\sum_{i=1}^{N}\left\|b_i - h_i\right\|_2^2,$$
$$\text{s.t. } b_i \in \{-1, 1\}^K\ (i = 1, \ldots, N). \quad (11)$$
The last term actually measures the constraint violation caused by the outputs of the last fully
connected layer. If the parameter $\eta$ is set sufficiently large, the constraint violation is penalized
severely. Therefore the outputs of the last fully connected layer are forced closer to the binary codes,
which are employed for classification directly.
The benefit of introducing an auxiliary variable is that we can decompose Equation 11 into two
sub-problems, which can be iteratively solved by using the alternating minimization method.
First, when fixing $b_i$ and $W$, we have:
$$\frac{\partial F}{\partial h_i} = \frac{1}{2}\sum_{j: s_{ij} \in S}\left(\frac{e^{\Psi_{ij}}}{1 + e^{\Psi_{ij}}} - s_{ij}\right) h_j + \frac{1}{2}\sum_{j: s_{ji} \in S}\left(\frac{e^{\Psi_{ji}}}{1 + e^{\Psi_{ji}}} - s_{ji}\right) h_j - 2\eta(b_i - h_i). \quad (12)$$
Then we update the parameters $M$, $n$ and $\theta$ as follows:
$$\frac{\partial F}{\partial M} = \Phi(x_i; \theta)\left(\frac{\partial F}{\partial h_i}\right)^T, \quad \frac{\partial F}{\partial n} = \frac{\partial F}{\partial h_i}, \quad \frac{\partial F}{\partial \Phi(x_i; \theta)} = M\frac{\partial F}{\partial h_i}. \quad (13)$$
The gradient will propagate to previous layers by the Back Propagation (BP) algorithm.
Second, when fixing $M$, $n$, $\theta$ and $b_i$, we solve for $W$:
$$F = \mu\sum_{i=1}^{N}\left\|y_i - W^T b_i\right\|_2^2 + \nu \|W\|_F^2. \quad (14)$$
Equation 14 is a least squares problem, which has a closed form solution:
$$W = \left(BB^T + \frac{\nu}{\mu} I\right)^{-1} B Y^T, \quad (15)$$
where $B = \{b_i\}_{i=1}^{N} \in \{-1, 1\}^{K \times N}$ and $Y = \{y_i\}_{i=1}^{N} \in \mathbb{R}^{c \times N}$.
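Equation 15 is an ordinary ridge-regression solve; a NumPy sketch (names are ours):

```python
import numpy as np

def solve_W(B, Y, mu, nu):
    """Closed-form classifier update of Eq. (15).
    B: K x N binary codes, Y: c x N labels. Returns W of shape K x c."""
    K = B.shape[0]
    A = B @ B.T + (nu / mu) * np.eye(K)
    return np.linalg.solve(A, B @ Y.T)
```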
Finally, when fixing M , n, ? and W , Equation 11 becomes:
F =?
N
N
P
P
2
yi ? W T bi
2 + ? kW k2 + ?
kbi ? hi k2 ,
F
2
i=1
s.t.
i=1
K
(16)
bi ? {?1, 1} , (i = 1, ..., N ) .
In this paper, we use the discrete cyclic coordinate descent method to solve B iteratively, row by row:

min_B ‖W^T B‖_F^2 − 2 Tr(P^T B),   s.t. B ∈ {−1, 1}^{K×N},   (17)

where P = W Y + (η/μ) H, with H = {h_i}_{i=1}^N ∈ R^{K×N}. Let x^T be the k-th (k = 1, ..., K) row of B and B_1 be the matrix of B excluding
x^T; let p^T be the k-th row of P and P_1 be the matrix of P excluding p; let w^T be the k-th row of
W and W_1 be the matrix of W excluding w. Then we can derive:

x = sgn(p − B_1^T W_1 w).   (18)

Each bit of the hash codes is thus computed based on the previously learned K − 1 bits B_1.
We iteratively update each bit until the algorithm converges.
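A sketch of the row-by-row update in Equation 18 follows; the matrix layout is ours, and we run a hypothetical fixed number of sweeps where the paper iterates until convergence.

import numpy as np

def dcc_update_B(B, W, Y, H, mu=1.0, eta=55.0, n_sweeps=3):
    """Discrete cyclic coordinate descent for Equations 16-18, sketch only.

    B: (K, N) codes; W: (K, C) classifier; Y: (C, N) labels; H: (K, N) outputs.
    """
    K = B.shape[0]
    P = W @ Y + (eta / mu) * H                     # P = W Y + (eta/mu) H, shape (K, N)
    for _ in range(n_sweeps):
        for k in range(K):
            others = [j for j in range(K) if j != k]
            B1, W1 = B[others, :], W[others, :]    # everything but the k-th row
            p, w = P[k, :], W[k, :]
            x = np.sign(p - B1.T @ (W1 @ w))       # Equation 18
            x[x == 0] = 1                          # keep codes in {-1, +1}
            B[k, :] = x
    return B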
3 Experiments
3.1 Experimental settings
We conduct extensive experiments on two public benchmark datasets: CIFAR-10 and NUS-WIDE.
CIFAR-10 is a dataset containing 60,000 color images in 10 classes, and each class contains 6,000
images with a resolution of 32×32. Different from CIFAR-10, NUS-WIDE is a public multi-label
image dataset. There are 269,648 color images in total with 5,018 unique tags. Each image is
annotated with one or multiple class labels from the 5,018 tags. Similar to [8, 12, 20, 24], we use a
subset of 195,834 images that are associated with the 21 most frequent concepts. Each of these
concepts is associated with at least 5,000 color images in the dataset.
We follow the previous experimental setting in [8, 9, 18]. In CIFAR-10, we randomly select 100
images per class (1,000 images in total) as the test query set, and 500 images per class (5,000 images
in total) as the training set. For the NUS-WIDE dataset, we randomly sample 100 images per class (2,100
images in total) as the test query set, 500 images per class (10,500 images in total) as the training set.
The similar pairs are constructed according to the image labels: two images will be considered similar
if they share at least one common semantic label. Otherwise, they will be considered dissimilar.
We also conduct experiments on CIFAR-10 and NUS-WIDE dataset under a different experimental
setting. In CIFAR-10, 1,000 images per class (10,000 images in total) are selected as the test query
set, the remaining 50,000 images are used as the training set. In NUS-WIDE, 100 images per class
(2,100 images in total) are randomly sampled as the test query images, and the remaining images (193,734
images in total) are used as the training set.
As for the comparison methods, we roughly divide them into two groups: traditional hashing methods
and deep hashing methods. The compared traditional hashing methods consist of unsupervised
and supervised methods. Unsupervised hashing methods include SH [19], ITQ [4]. Supervised
hashing methods include SPLH [16], KSH [13], FastH [10], LFH [23], and SDH [15]. Both the
hand-crafted features and the features extracted by CNN-F network architecture are used as the input
for the traditional hashing methods. Similar to previous works, the handcrafted features include a
512-dimensional GIST descriptor to represent images of CIFAR-10 dataset, and a 1134-dimensional
feature vector to represent images of NUS-WIDE dataset. The deep hashing methods include
DQN [1], DHN [27], CNNH [20], NINH [8], DSRH [26], DSCH [24], DRSCH [24], DPSH [9],
DTSH [18] and VDSH [25]. Note that DPSH, DTSH and DSDH are based on the CNN-F network
architecture, while DQN, DHN, DSRH are based on AlexNet architecture. Both the CNN-F network
architecture and AlexNet architecture consist of five convolutional layers and two fully connected
layers. In order to have a fair comparison, most of the results are directly reported from previous
works. Following [25], the pre-trained CNN-F model is used to extract CNN features on CIFAR-10,
while a 500 dimensional bag-of-words feature vector is used to represent each image on NUS-WIDE
for VDSH. Then we re-run the source code provided by the authors to obtain the retrieval performance.
The parameters of our algorithm are set via a standard cross-validation procedure: μ, ν and η
in Equation 11 are set to 1, 0.1 and 55, respectively.
Similar to [8], we adopt four widely used evaluation metrics to evaluate image retrieval quality:
mean average precision (MAP) for different numbers of bits, precision curves within Hamming
distance 2, precision curves for different numbers of top returned samples, and precision-recall curves.
When computing MAP for the NUS-WIDE dataset, we only consider the top 5,000 returned neighbors
under the first experimental setting, and the top 50,000 returned neighbors under the second.
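For reference, here is a minimal sketch of MAP over a Hamming ranking. It is our own illustrative implementation; exact evaluation protocols (tie handling, normalization) vary slightly across papers.

import numpy as np

def mean_average_precision(query_codes, db_codes, relevant, top_k=None):
    """MAP for Hamming ranking, sketch only.

    query_codes: (Q, K), db_codes: (M, K), entries in {-1, +1};
    relevant[q, m] is True if database item m shares a label with query q.
    """
    aps = []
    for q in range(query_codes.shape[0]):
        dist = 0.5 * (db_codes.shape[1] - db_codes @ query_codes[q])  # Hamming
        order = np.argsort(dist, kind="stable")
        if top_k is not None:
            order = order[:top_k]
        rel = relevant[q, order]
        if rel.sum() == 0:
            continue                          # no relevant item retrieved
        precision_at = np.cumsum(rel) / (np.arange(rel.size) + 1)
        aps.append(float((precision_at * rel).sum() / rel.sum()))
    return float(np.mean(aps))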
3.2 Empirical analysis

[Figure 1 appears here: three panels (a)-(c); see the caption below.]
Figure 1: The results of DSDH-A, DSDH-B, DSDH-C and DSDH on the CIFAR-10 dataset: (a) precision
curves within Hamming radius 2; (b) precision curves with respect to different number of top returned
images; (c) precision-recall curves of Hamming ranking with 48 bits.
In order to verify the effectiveness of our method, several variants of our method (DSDH) are
also proposed. First, we only consider the pairwise label information while neglecting the linear
classification information in Equation 7, which is named DSDH-A (similar to [9]). Then we design
a two-stream deep hashing algorithm to learn the hash codes. One stream is designed based on
the pairwise label information in Equation 3, and the other stream is constructed based on the
classification information. The two streams share the same image representations except for the last
Table 1: MAP for different methods under the first experimental setting. The MAP for the NUS-WIDE
dataset is calculated based on the top 5,000 returned neighbors. DPSH* denotes re-running the code
provided by the authors of DPSH.

CIFAR-10
Method   12 bits   24 bits   32 bits   48 bits
Ours      0.740     0.786     0.801     0.820
DQN       0.554     0.558     0.564     0.580
DPSH      0.713     0.727     0.744     0.757
DHN       0.555     0.594     0.603     0.621
DTSH      0.710     0.750     0.765     0.774
NINH      0.552     0.566     0.558     0.581
CNNH      0.439     0.511     0.509     0.522
FastH     0.305     0.349     0.369     0.384
SDH       0.285     0.329     0.341     0.356
KSH       0.303     0.337     0.346     0.356
LFH       0.176     0.231     0.211     0.253
SPLH      0.171     0.173     0.178     0.184
ITQ       0.162     0.169     0.172     0.175
SH        0.127     0.128     0.126     0.129

NUS-WIDE
Method   12 bits   24 bits   32 bits   48 bits
Ours      0.776     0.808     0.820     0.829
DQN       0.768     0.776     0.783     0.792
DPSH*     0.752     0.790     0.794     0.812
DHN       0.708     0.735     0.748     0.758
DTSH      0.773     0.808     0.812     0.824
NINH      0.674     0.697     0.713     0.715
CNNH      0.611     0.618     0.625     0.608
FastH     0.621     0.650     0.665     0.687
SDH       0.568     0.600     0.608     0.637
KSH       0.556     0.572     0.581     0.588
LFH       0.571     0.568     0.568     0.585
SPLH      0.568     0.589     0.597     0.601
ITQ       0.452     0.468     0.472     0.477
SH        0.454     0.406     0.405     0.400
fully connected layer. We denote this method as DSDH-B. Besides, we also design another approach
that applies the sign function directly to the outputs of the last fully connected layer in Equation 7,
which is denoted as DSDH-C. The loss function of DSDH-C can be represented as:
F = −∑_{s_ij∈S} (s_ij Θ_ij − log(1 + e^{Θ_ij})) + μ ∑_{i=1}^N ‖y_i − W^T h_i‖_2^2 + ν ‖W‖_F^2 + η ∑_{i=1}^N ‖b_i − sgn(h_i)‖_2^2,   (19)

s.t. h_i ∈ R^{K×1}, i = 1, ..., N.
Then we use the alternating minimization method to optimize DSDH-C. The results of the different
methods on CIFAR-10 under the first experimental setting are shown in Figure 1. From Figure 1
we can see that: (1) DSDH-C performs better than DSDH-A, and DSDH-B is better than
DSDH-A in terms of precision within Hamming radius 2 and precision-recall curves. More information
is exploited in DSDH-C than in DSDH-A, which demonstrates that the classification information is helpful
for learning the hash codes. (2) The improvement of DSDH-C over DSDH-A is nevertheless marginal. The reason
is that the classification information in DSDH-C is used only to learn the image representations,
and is not fully exploited. Moreover, because it violates the discrete nature of the hash codes, DSDH-C incurs a
large quantization loss. Note that our method further beats DSDH-B and DSDH-C by a large margin.
3.3 Results under the first experimental setting
The MAP results of all methods on CIFAR-10 and NUS-WIDE under the first experimental setting
are listed in Table 1. From Table 1 we can see that the proposed method substantially outperforms
the traditional hashing methods on the CIFAR-10 dataset. The MAP result of our method is more than
twice that of SDH, FastH and ITQ. Besides, most of the deep hashing methods perform better
than the traditional hashing methods. In particular, DTSH achieves the best performance among all
methods other than DSDH on the CIFAR-10 dataset. Compared with DTSH, our method further
improves the performance by 3-7 percentage points. These results verify that learning the hash function and
classifier within a one-stream framework can boost the retrieval performance.
The gap between the deep hashing methods and the traditional hashing methods is much smaller on the
NUS-WIDE dataset than on CIFAR-10. For example, the average MAP result of SDH is 0.603, while
the average MAP result of DTSH is 0.804. The proposed method is slightly superior to DTSH in
terms of MAP on the NUS-WIDE dataset. The main reasons are that there exist more categories in
NUS-WIDE than in CIFAR-10, and that each image contains multiple labels. Compared with CIFAR-10,
there are only 500 images per class for training, which may not be enough for DSDH to learn the
multi-label classifier. Thus the second term in Equation 7 plays a limited role in learning a better hash
function. In Section 3.4, we will show that our method achieves
Table 2: MAP for different methods under the second experimental setting. The MAP for the NUS-WIDE
dataset is calculated based on the top 50,000 returned neighbors. DPSH* denotes re-running the
code provided by the authors of DPSH.

CIFAR-10
Method   16 bits   24 bits   32 bits   48 bits
Ours      0.935     0.940     0.939     0.939
DTSH      0.915     0.923     0.925     0.926
DPSH      0.763     0.781     0.795     0.807
VDSH      0.845     0.848     0.844     0.845
DRSCH     0.615     0.622     0.629     0.631
DSCH      0.609     0.613     0.617     0.620
DSRH      0.608     0.611     0.617     0.618
DPSH*     0.903     0.885     0.915     0.911

NUS-WIDE
Method   16 bits   24 bits   32 bits   48 bits
Ours      0.815     0.814     0.820     0.821
DTSH      0.756     0.776     0.785     0.799
DPSH      0.715     0.722     0.736     0.741
VDSH      0.545     0.564     0.557     0.570
DRSCH     0.618     0.622     0.623     0.628
DSCH      0.592     0.597     0.611     0.609
DSRH      0.609     0.618     0.621     0.631
DPSH*     N/A       N/A       N/A       N/A
Table 3: MAP for different methods under the first experimental setting. The MAP for the NUS-WIDE
dataset is calculated based on the top 5,000 returned neighbors.

CIFAR-10
Method       12 bits   24 bits   32 bits   48 bits
Ours          0.740     0.786     0.801     0.820
FastH+CNN     0.553     0.607     0.619     0.636
SDH+CNN       0.478     0.557     0.584     0.592
KSH+CNN       0.488     0.539     0.548     0.563
LFH+CNN       0.208     0.242     0.266     0.339
SPLH+CNN      0.299     0.330     0.335     0.330
ITQ+CNN       0.237     0.246     0.255     0.261
SH+CNN        0.183     0.164     0.161     0.161

NUS-WIDE
Method       12 bits   24 bits   32 bits   48 bits
Ours          0.776     0.808     0.820     0.829
FastH+CNN     0.779     0.807     0.816     0.825
SDH+CNN       0.780     0.804     0.815     0.824
KSH+CNN       0.768     0.786     0.790     0.799
LFH+CNN       0.695     0.734     0.739     0.759
SPLH+CNN      0.753     0.775     0.783     0.786
ITQ+CNN       0.719     0.739     0.747     0.756
SH+CNN        0.621     0.616     0.615     0.612
a better performance than other deep hashing methods with more training images per class for the
multi-label dataset.
3.4 Results under the second experimental setting
Deep hashing methods usually need many training images to learn the hash function. In this section,
we compare with other deep hashing methods under the second experimental setting, which contains
more training images. Table 2 lists MAP results for different methods under the second experimental
setting. As shown in Table 2, with more training images, most of the deep hashing methods perform
better than in Section 3.3. On the CIFAR-10 dataset, the average MAP result of DRSCH is 0.624, and
the average MAP results of DPSH, DTSH and VDSH are 0.787, 0.922 and 0.846, respectively. The
average MAP result of our method is 0.938. DTSH, DPSH and VDSH have
a significant advantage over the other deep hashing methods, and our method further outperforms DTSH,
DPSH and VDSH by about 2-3 percentage points. On the NUS-WIDE dataset, our method still achieves the
best performance in terms of MAP. The performance of VDSH on NUS-WIDE dataset drops severely.
The possible reason is that VDSH uses the provided bag-of-words features instead of the learned
features.
3.5 Comparison with traditional hashing methods using deep learned features
In order to have a fair comparison, we also compare with traditional hashing methods using deep
learned features extracted by the CNN-F network under the first experimental setting. The MAP
results of different methods are listed in Table 3. As shown in Table 3, most of the traditional
hashing methods obtain a better retrieval performance using deep learned features. The average
MAP results of FastH+CNN and SDH+CNN on the CIFAR-10 dataset are 0.604 and 0.553, respectively,
while the average MAP result of our method on CIFAR-10 is 0.787, which outperforms the
traditional hashing methods with deep learned features. Besides, the proposed algorithm achieves
performance comparable to the best traditional hashing methods on the NUS-WIDE dataset under the
first experimental setting.
4 Conclusion
In this paper, we have proposed a novel deep supervised discrete hashing algorithm. We constrain
the outputs of the last layer to be binary codes directly. Both the pairwise label information and the
classification information are used for learning the hash codes within a one-stream framework. Because
of the discrete nature of the hash codes, we derive an alternating minimization method to optimize
the loss function. Extensive experiments have shown that our method outperforms state-of-the-art
methods on benchmark image retrieval datasets.
5 Acknowledgements
This work was partially supported by the National Key Research and Development Program of China
(Grant No. 2016YFB1001000) and the Natural Science Foundation of China (Grant No. 61622310).
References
[1] Y. Cao, M. Long, J. Wang, H. Zhu, and Q. Wen. Deep quantization network for efficient image retrieval. In AAAI, pages 3457-3463, 2016.
[2] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
[3] A. Gionis, P. Indyk, R. Motwani, et al. Similarity search in high dimensions via hashing. In VLDB, pages 518-529, 1999.
[4] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE TPAMI, 35(12):2916-2929, 2013.
[5] J. Ji, J. Li, S. Yan, B. Zhang, and Q. Tian. Super-bit locality-sensitive hashing. In NIPS, pages 108-116, 2012.
[6] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, pages 1042-1050, 2009.
[7] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, pages 2130-2137, 2009.
[8] H. Lai, Y. Pan, Y. Liu, and S. Yan. Simultaneous feature learning and hash coding with deep neural networks. In CVPR, pages 3270-3278, 2015.
[9] W.-J. Li, S. Wang, and W.-C. Kang. Feature learning based deep supervised hashing with pairwise labels. In IJCAI, pages 1711-1717, 2016.
[10] G. Lin, C. Shen, Q. Shi, A. van den Hengel, and D. Suter. Fast supervised hashing with decision trees for high-dimensional data. In CVPR, pages 1963-1970, 2014.
[11] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen. Deep learning of binary hash codes for fast image retrieval. In CVPRW, pages 27-35, 2015.
[12] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In ICML, pages 1-8, 2011.
[13] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In CVPR, pages 2074-2081, 2012.
[14] Y. Mu and S. Yan. Non-metric locality-sensitive hashing. In AAAI, pages 539-544, 2010.
[15] F. Shen, C. Shen, W. Liu, and H. Tao Shen. Supervised discrete hashing. In CVPR, pages 37-45, 2015.
[16] J. Wang, S. Kumar, and S.-F. Chang. Sequential projection learning for hashing with compact codes. In ICML, pages 1127-1134, 2010.
[17] J. Wang, J. Wang, N. Yu, and S. Li. Order preserving hashing for approximate nearest neighbor search. In ACM MM, pages 133-142, 2013.
[18] X. Wang, Y. Shi, and K. M. Kitani. Deep supervised hashing with triplet labels. In ACCV, pages 70-84, 2016.
[19] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, pages 1753-1760, 2009.
[20] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, pages 2156-2162, 2014.
[21] H. F. Yang, K. Lin, and C. S. Chen. Supervised learning of semantics-preserving hash via deep convolutional neural networks. IEEE TPAMI, (99):1-1, 2017.
[22] T. Yao, F. Long, T. Mei, and Y. Rui. Deep semantic-preserving and ranking-based hashing for image retrieval. In IJCAI, pages 3931-3937, 2016.
[23] P. Zhang, W. Zhang, W.-J. Li, and M. Guo. Supervised hashing with latent factor models. In SIGIR, pages 173-182, 2014.
[24] R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang. Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE TIP, 24(12):4766-4779, 2015.
[25] Z. Zhang, Y. Chen, and V. Saligrama. Efficient training of very deep neural networks for supervised hashing. In CVPR, pages 1487-1495, 2016.
[26] F. Zhao, Y. Huang, L. Wang, and T. Tan. Deep semantic ranking based hashing for multi-label image retrieval. In CVPR, pages 1556-1564, 2015.
[27] H. Zhu, M. Long, J. Wang, and Y. Cao. Deep hashing network for efficient similarity retrieval. In AAAI, pages 2415-2421, 2016.
6,460 | 6,843 | Using Options and Covariance Testing for Long
Horizon Off-Policy Policy Evaluation
Zhaohan Daniel Guo
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Philip S. Thomas
University of Massachusetts Amherst
Amherst, MA 01003
[email protected]
Emma Brunskill
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Evaluating a policy by deploying it in the real world can be risky and costly.
Off-policy policy evaluation (OPE) algorithms use historical data collected from
running a previous policy to evaluate a new policy, which provides a means for
evaluating a policy without requiring it to ever be deployed. Importance sampling
is a popular OPE method because it is robust to partial observability and works
with continuous states and actions. However, the amount of historical data required
by importance sampling can scale exponentially with the horizon of the problem:
the number of sequential decisions that are made. We propose using policies over
temporally extended actions, called options, and show that combining these policies
with importance sampling can significantly improve performance for long-horizon
problems. In addition, we can take advantage of special cases that arise due to
options-based policies to further improve the performance of importance sampling.
We further generalize these special cases to a general covariance testing rule that
can be used to decide which weights to drop in an IS estimate, and derive a new IS
algorithm called Incremental Importance Sampling that can provide significantly
more accurate estimates for a broad class of domains.
1 Introduction
One important problem for many high-stakes sequential decision making under uncertainty domains,
including robotics, health care, education, and dialogue systems, is estimating the performance of a
new policy without requiring it to be deployed. To address this, off-policy policy evaluation (OPE)
algorithms use historical data collected from executing one policy (called the behavior policy), to
predict the performance of a new policy (called the evaluation policy). Importance sampling (IS)
is one powerful approach that can be used to evaluate the potential performance of a new policy
[12]. In contrast to model based approaches to OPE [5], importance sampling provides an unbiased
estimate of the performance of the evaluation policy. In particular, importance sampling is robust
to partial observability, which is often prevalent in real-world domains. Unfortunately, importance
sampling estimates of the performance of the evaluation policy can be inaccurate when the horizon
of the problem is long: the variance of IS estimators can grow exponentially with the number of
sequential decisions made in an episode. This is a serious limitation for applications that involve
decisions made over tens or hundreds of steps, like dialogue systems where a conversation might
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
require dozens of responses, or intelligent tutoring systems that make dozens of decisions about how
to sequence the content shown to a student.
Due to the importance of OPE, there have been many recent efforts to improve the accuracy of importance sampling. For example, Dudík et al. [4] and Jiang and Li [7] proposed doubly robust importance
sampling estimators that can greatly reduce the variance of predictions when an approximate model of
the environment is available. Thomas and Brunskill [16] proposed an estimator that further integrates
importance sampling and model-based approaches, and which can greatly reduce mean squared error.
These approaches trade-off between the bias and variance of model-based and importance sampling
approaches, and result in strongly consistent estimators. Unfortunately, in long horizon settings,
these approaches will either create estimates that suffer from high variance or exclusively rely on the
provided approximate model, which can have high bias. Other recent efforts that estimate a value
function using off-policy data rather than just the performance of a policy [6, 11, 19] also suffer from
bias if the input state description is not Markovian (such as if the domain description induces partial
observability).
To provide better off policy estimates in long horizon domains, we propose leveraging temporal
abstraction. In particular, we analyze using options-based policies (policies with temporally extended
actions) [14] instead of policies over primitive actions. We prove that we can obtain an exponential
reduction in the variance of the resulting estimates, and in some cases, cause the variance to be
independent of the horizon. We also demonstrate this benefit with simple simulations. Crucially, our
results can be equivalently viewed as showing that using options can drastically reduce the amount of
historical data required to obtain an accurate estimate of a new evaluation policy?s performance.
We also show that using options-based policies can result in special cases which can lead to significant
reduction in estimation error through dropping importance sampling weights. Furthermore, we
generalize the idea of dropping weights and derive a covariance test that can be used to automatically
determine which weights to drop. We demonstrate the potential of this approach by constructing a
new importance sampling algorithm called Incremental Importance Sampling (INCRIS) and show
empirically that it can significantly reduce estimation error.
2 Background
We consider an agent interacting with a Markov decision process (MDP) for a finite sequence of time
steps. At each time step the agent executes an action, after which the MDP transitions to a new state
and returns a real-valued reward. Let s ∈ S be a discrete state, a ∈ A be a discrete action, and r be
the reward, bounded in [0, R_max].
The transition and reward dynamics are unknown and are denoted by the transition probability
T(s'|s, a) and reward density R(r|s, a). A primitive policy maps histories to action probabilities,
i.e., π(a_t|s_1, a_1, r_1, ..., s_t) is the probability of executing action a_t at time step t after encountering
history s_1, a_1, r_1, ..., s_t. The return of a trajectory τ of H steps is simply the sum of the rewards,
G(τ) = ∑_{t=1}^H r_t. Note that we consider the undiscounted setting, where γ = 1. The value of a policy π is
the expected return when running that policy: V_π = E_π(G(τ)).
Temporal abstraction can reduce the computational complexity of planning and online learning
[2, 9, 10, 14]. One popular form of temporal abstraction is to use sub-policies, in particular options
[14]. Let Φ be the space of trajectories. An option o consists of π, a primitive policy (a policy over
primitive actions); β : Φ → [0, 1], a termination condition, where β(τ) is the probability of stopping
the option given the current partial trajectory τ ∈ Φ since the option began; and I ⊆ S, an input
set, where s ∈ I denotes the states where o is allowed to start. Primitive actions can be considered a
special case of options that always terminate after a single step. μ(o_t|s_1, a_1, ..., s_t)
denotes the probability of picking option o_t given history (s_1, a_1, ..., s_t) when the previous option
has terminated, according to an options-based policy μ. A high-level trajectory of length k is denoted
by T = (s_1, o_1, v_1, s_2, o_2, v_2, ..., s_k, o_k, v_k), where v_t is the sum of the rewards accumulated while
executing option o_t.

In this paper we consider batch, offline, off-policy evaluation of policies for sequential decision-making
domains, using both primitive-action policies and options-based policies. We now
introduce the general OPE problem using primitive policies; in a later section we combine this
with options-based policies.
In OPE we assume access to historical data, D, generated by an MDP and a behavior policy
π_b. D consists of n trajectories, {τ^{(i)}}_{i=1}^n. A trajectory has length H and is denoted by τ^{(i)} =
(s_1^{(i)}, a_1^{(i)}, r_1^{(i)}, s_2^{(i)}, a_2^{(i)}, r_2^{(i)}, ..., s_H^{(i)}, a_H^{(i)}, r_H^{(i)}). In off-policy evaluation, the goal is to use the data
D to estimate the value of an evaluation policy π_e: V_{π_e}. As D was generated from running the
behavior policy π_b, we cannot simply use the Monte Carlo estimate. An alternative is to use
importance sampling to reweight the data in D, giving greater weight to samples that are likely under
π_e and lesser weight to unlikely ones. We consider per-decision importance sampling (PDIS) [12],
which gives the following estimate of the value of π_e:

PDIS(D) = (1/n) ∑_{i=1}^n ∑_{t=1}^H ρ_t^{(i)} r_t^{(i)},   ρ_t^{(i)} = ∏_{u=1}^t π_e(a_u^{(i)}|s_u^{(i)}) / π_b(a_u^{(i)}|s_u^{(i)}),   (1)

where ρ_t^{(i)} is the weight given to the rewards to correct for the difference in distribution. This
estimator is an unbiased estimator of the value of π_e:

E_{π_e}(G(τ)) = E_{π_b}(PDIS(τ)),   (2)

where E_π(...) is the expected value given that the trajectories τ are generated by π.
For simplicity, hereafter we assume that primitive and options-based policies are a function only of
the current state, but our results also apply when they are a function of the history. Note that
importance sampling does not assume that the states in the trajectory are Markovian, and is thus
robust to error in the state representation and, in general, to partial observability as well.
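A compact sketch of the PDIS estimator in Equation 1, assuming the two policies are provided as action-probability callables (the interface and names are ours):

def pdis(trajectories, pi_e, pi_b):
    """Per-decision importance sampling (Equation 1), sketch only.

    trajectories: list of [(s_1, a_1, r_1), ..., (s_H, a_H, r_H)];
    pi_e(a, s) and pi_b(a, s) return action probabilities.
    """
    total = 0.0
    for tau in trajectories:
        rho = 1.0
        for (s, a, r) in tau:
            rho *= pi_e(a, s) / pi_b(a, s)   # running product of ratios
            total += rho * r                 # each reward gets its own weight
    return total / len(trajectories)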
3 Importance Sampling and Long Horizons
We now show how the amount of data required for importance sampling to obtain a good off-policy
estimate can scale exponentially with the problem horizon. Notice that in the standard importance
sampling estimator, the weight is a product of ratios of action probabilities. We now prove that
this can cause the variance of the policy estimate to be exponential in H.¹

Theorem 1. The mean squared error of the PDIS estimator can be Ω(2^H). Proof. See appendix.

Equivalently, this means that achieving a desired mean squared error of ε can require a number of
trajectories that scales exponentially with the horizon. A natural question is whether this issue also
arises with weighted importance sampling [13], a popular (biased) approach to OPE that has lower
variance. We show below that the long-horizon problem still persists.

Theorem 2. It can take Ω(2^H) trajectories to shrink the MSE of weighted importance sampling
(WIS) by a constant. Proof. See appendix.

¹These theorems can be seen as special-case instantiations of Theorem 6 in [8], with simpler, direct proofs.
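The exponential blow-up is easy to reproduce numerically. The toy sketch below (ours) uses a single binary action per step; the per-step weight has mean 1, so the variance of the length-H product is (p_e^2/p_b + (1-p_e)^2/(1-p_b))^H - 1, which grows geometrically with H. Note that the empirical variance understates the true value for large H, because the weights are heavy-tailed.

import numpy as np

rng = np.random.default_rng(0)
p_b, p_e = 0.5, 0.9        # probability of action 0 under behavior/evaluation
for H in (5, 10, 20):
    takes_a0 = rng.random((100_000, H)) < p_b         # actions drawn from pi_b
    w = np.where(takes_a0, p_e / p_b, (1 - p_e) / (1 - p_b)).prod(axis=1)
    print(H, w.mean(), w.var())   # mean stays near 1; variance explodes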
4 Combining Options and Importance Sampling
We will show that one can leverage the advantage of options to mitigate the long-horizon problem. If
the behavior and evaluation policies are both options-based policies, then the PDIS estimator can be
exponentially more data efficient compared to using primitive behavior and evaluation policies.

Due to the structure in options-based policies, we can decompose the difference between the behavior
policy and the evaluation policy in a natural way. Let μ_b be the options-based behavior policy and
μ_e be the options-based evaluation policy. First, we examine the probabilities over the options. The
probabilities μ_b(o_t|s_t) and μ_e(o_t|s_t) can differ and contribute a ratio of probabilities as an importance
sampling weight. Second, the underlying policy, π, for an option, o_t, present in both μ_b and μ_e may
differ, and this also contributes to the importance sampling weights. Finally, additional or missing
options can be expressed by setting the probabilities over missing options to zero for either μ_b or
μ_e. Using this decomposition, we can easily apply PDIS to options-based policies.

Theorem 3. Let O be the set of options whose underlying policies are the same between μ_b and μ_e,
and let Ō be the set of options whose underlying policies have changed. Let k^{(i)} be the length of the i-th
high-level trajectory from data set D, and let j_t^{(i)} be the length of the sub-trajectory produced by option
o_t^{(i)}. The PDIS estimator applied to D is
PDIS(D) = (1/n) ∑_{i=1}^n ∑_{t=1}^{k^{(i)}} w_t^{(i)} y_t^{(i)},   w_t^{(i)} = ∏_{u=1}^t μ_e(o_u^{(i)}|s_u^{(i)}) / μ_b(o_u^{(i)}|s_u^{(i)}),   (3)

y_t^{(i)} = v_t^{(i)} if o_t^{(i)} ∈ O;   y_t^{(i)} = ∑_{b=1}^{j_t^{(i)}} ρ_{t,b}^{(i)} r_{t,b}^{(i)} if o_t^{(i)} ∈ Ō,   with ρ_{t,b}^{(i)} = ∏_{c=1}^b π_e(a_{t,c}^{(i)}|s_{t,c}^{(i)}, o_t^{(i)}) / π_b(a_{t,c}^{(i)}|s_{t,c}^{(i)}, o_t^{(i)}),   (4)

where r_{t,b}^{(i)} is the b-th reward in the sub-trajectory of option o_t^{(i)}, and similarly for s and a.
Proof. This is a straightforward application of PDIS to the options-based policies using the decomposition mentioned.
Theorem 3 expresses the weights in two parts: one part comes from the probabilities over options,
expressed as w_t^{(i)}, and another part comes from the underlying primitive policies of the options
that have changed, expressed as ρ_{t,b}^{(i)}. We can immediately make some interesting observations,
given after the sketch below.
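As a sketch (interface and names ours), here is the estimator of Theorem 3 in the simplest case where no option's underlying policy changes (every o_t ∈ O), so only the high-level weights w_t^{(i)} remain:

def options_pdis(high_level_trajs, mu_e, mu_b):
    """Options-based PDIS (Equation 3) with unchanged option policies, sketch only.

    high_level_trajs: list of [(s_1, o_1, v_1), ..., (s_k, o_k, v_k)], where
    v_t is the reward accumulated while option o_t ran; mu_e(o, s) and
    mu_b(o, s) return option probabilities.
    """
    total = 0.0
    for T in high_level_trajs:
        w = 1.0
        for (s, o, v) in T:
            w *= mu_e(o, s) / mu_b(o, s)   # weights over options only
            total += w * v                 # y_t = v_t since o_t is unchanged
    return total / len(high_level_trajs)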
Corollary 1. If no underlying policies for options are changed between μ_b and μ_e, and all options
have length at least J steps, then the worst-case variance of PDIS is exponentially reduced, from
Ω(2^H) to Ω(2^{H/J}).

Corollary 1 follows from Theorem 3. Since no underlying policies are changed, the only
importance sampling weights left are the w_t^{(i)}. Thus we can focus our attention on the high-level
trajectory, which has length at most H/J. Effectively, the horizon has shrunk from H to H/J, which
yields an exponential reduction of the worst-case variance of PDIS.

Corollary 2. If the probabilities over options are the same between μ_b and μ_e, and a subset of
options Ō have changed their underlying policies, then the worst-case variance of PDIS is reduced
from Ω(2^H) to Ω(2^K), where K is an upper bound on the sum of the lengths of the changed options.

Corollary 2 follows from Theorem 3. The options whose underlying policies are the same between
behavior and evaluation can effectively be ignored and cut out of the trajectories in the data. This
leaves only the options whose underlying policies have changed, shrinking the horizon from H down to
the total length of the leftover options. For example, if only a single option of length 3 is changed, and the
option appears once in a trajectory, then the horizon can effectively be reduced to just 3. This result
can be very powerful, as the reduced variance becomes independent of the horizon H.
5 Experiment 1: Options-based Policies
This experiment illustrates how using options-based policies can significantly improve the accuracy
of importance-sampling-based estimators for long-horizon domains. Since importance sampling
is particularly useful when a good model of the domain is unknown and/or the domain involves
partial observability, we introduce a partially observable variant of the popular Taxi domain [3],
called NoisyTaxi, for our simulations (see Figure 1).

5.1 Partially Observable Taxi
Figure 1: Taxi Domain [3]. It is a 5×5 gridworld (Figure 1). There are 4 special
locations: R, G, B, Y. A passenger starts randomly at one of the 4 locations, and
its destination is randomly chosen from one of the 4 locations. The taxi starts
randomly on any square. The taxi can move one step in any of the 4 cardinal
directions N, S, E, W, as well as attempt to pick up or drop off the passenger. Each
step has a reward of −1. An invalid pickup or dropoff has a reward of −10, and a
successful dropoff has a reward of 20.

In NoisyTaxi, the location of the taxi and the location of the passenger are partially observable. If the
row location of the taxi is c, the agent observes c with probability 0.85, c + 1 with probability 0.075,
and c − 1 with probability 0.075 (if adding or subtracting 1 would cause the location to be outside
the grid, the resulting location is constrained to still lie in the grid). The column location of the taxi
is observed with the same noise distribution. Before the taxi successfully picks up the passenger, the
observed location of the passenger has probability 0.15 of switching randomly to one of
the four designated locations. After the passenger is picked up, the passenger is observed to be in the
taxi with probability 1 (i.e., no noise while in the taxi).
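A sketch of this observation model (our own code, with rows and columns indexed 0 through 4):

import numpy as np

rng = np.random.default_rng(0)

def observe_coordinate(c, low=0, high=4):
    """Noisy row/column observation in NoisyTaxi, sketch only."""
    shift = rng.choice([0, 1, -1], p=[0.85, 0.075, 0.075])
    return int(np.clip(c + shift, low, high))  # constrained to stay on the grid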
5.2 Experimental Results
We consider ε-greedy option policies, where with probability 1 − ε the policy samples the optimal
option, and with probability ε the policy samples a random option. Options in this case are n-step policies,
where "optimal" options involve taking n steps of the optimal (primitive-action) policy, and "random"
options involve taking n random primitive actions.² Our behavior policies μ_b use ε = 0.3 and
our evaluation policies μ_e use ε = 0.05. We investigate how the accuracy of estimating μ_e varies as
a function both of the number of trajectories and of the option length n = 1, 2, 3. Note that n = 1
corresponds to a primitive-action policy.

Empirically, all behavior policies have essentially the same performance, and similarly all evaluation
policies have essentially the same performance. We first collect data using the behavior policies, and
then use PDIS to evaluate their respective evaluation policies.
Figure 2 compares the MSE (log scale) of the PDIS estimators for the evaluation policies.
Figure 2: Comparing the MSE of PDIS between primitive and options-based behavior and evaluation policy
pairs. Note that the y-axis is a log scale. Our results show
that PDIS for the options-based evaluation policies is
an order of magnitude better than PDIS for the primitive evaluation policy. Indeed, Corollary 1 shows that
the n-step options policies effectively reduce the
horizon by a factor of n over the primitive policy. As
expected, the options-based policies that use 3-step
options have the lowest MSE.
6 Going Further with Options
Often options are used to achieve a specific sub-task in a domain. For example, in a robot navigation
task, there may be an option to navigate to a special fixed location. However, one may realize that
there is a faster way to navigate to that location, so one may change that option and try to evaluate the
new policy to see whether it is actually better. In this case the old and new options are both always
able to reach the special location; the only difference is that the new option may get there faster. In
such a case we can further reduce the variance of PDIS. We now formally define this property.

Definition 1. Given behavior policy μ_b and evaluation policy μ_e, an option o is called stationary if
the distribution of the states on which o terminates is always the same under μ_b and μ_e. The underlying
policy for option o can differ between μ_b and μ_e; only the termination state distribution matters.

A stationary option need not arise from solving a sub-task; it can also be used as a way to perform a soft reset.
For example, a robotic manipulation task may want to reset arm and hand joints to a default configuration,
in order to minimize sensor/motor error, before trying to grasp a new object.
Stationary options allow us to point to a step in a trajectory where we know the state distribution
is fixed. Because the state distribution is fixed, we can partition the trajectory into two parts. The
beginning of the second partition then has a state distribution that is independent of the actions
chosen in the first partition. We can then apply PDIS independently to each partition and sum up the
estimates. This is powerful because it can halve the effective horizon of the problem.

²We have also tried using more standard options that navigate to a specific destination, and the experimental
results closely mirror those shown here.
Theorem 4. Let μ_b be an options-based behavior policy, let μ_e be an options-based evaluation
policy, and let O be the set of options that μ_b and μ_e use. The underlying policies of the options in μ_e
may be arbitrarily different from those in μ_b.

Let o_1 be a stationary option. We can decompose the expected value as follows. Let τ_1 be the first part
of a trajectory, up to and including the first occurrence of o_1, and let τ_2 be the part of the trajectory
after the first occurrence of o_1, up to and including the first occurrence of o_2. Then

E_{μ_e}(G(τ)) = E_{μ_b}(PDIS(τ)) = E_{μ_b}(PDIS(τ_1)) + E_{μ_b}(PDIS(τ_2)).   (5)
Proof. See appendix.
Note that there are no conditions on how the probabilities over options may differ, nor on how the
underlying policies of the non-stationary options may differ. This means that, regardless of these
differences, the trajectories can be partitioned and PDIS can be applied independently. Furthermore,
Theorem 3 can still be applied to each of the independent applications of PDIS. Combining Theorem
4 and Theorem 3 gives more ways of designing a desired evaluation policy that will result in a
low-variance PDIS estimate.
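A sketch of the resulting estimator (ours): split each trajectory at the first termination of the stationary option and apply PDIS independently to each part, so that the importance weights of the second part restart at 1.

def partitioned_pdis(trajectories, split_fn, pdis_fn):
    """Estimator implied by Theorem 4, sketch only.

    split_fn(tau) returns (tau_1, tau_2), the parts before and after the first
    termination of the stationary option; pdis_fn is a single-trajectory PDIS
    whose weight product starts fresh at the beginning of its argument.
    """
    parts = [split_fn(tau) for tau in trajectories]
    first = sum(pdis_fn(t1) for (t1, _) in parts) / len(parts)
    second = sum(pdis_fn(t2) for (_, t2) in parts) / len(parts)
    return first + second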
7 Experiment 2: Stationary Options
We now demonstrate Theorem 4 empirically on NoisyTaxi. In NoisyTaxi, we know that a primitive
ε-greedy policy will eventually pick up the passenger (though it may take a very long time depending
on ε). Since the starting location of the passenger is uniformly random, the location of the taxi
immediately after picking up the passenger is also uniformly random, but over the four pickup
locations. This implies that, regardless of the value of ε in an ε-greedy policy, we can view executing
that ε-greedy policy until the passenger is picked up as a new "PickUp-ε" option that always terminates
in the same state distribution.

Given this argument, we can use Theorem 4 to decompose any NoisyTaxi trajectory into the part
before the passenger is picked up and the part after the passenger is picked up, estimate the expected
reward for each, and then sum. As picking up the passenger is often the halfway point of a trajectory
(depending on the locations of the passenger and the destination), we can perform importance
sampling over two trajectories of approximately half the length. More concretely, we consider two n = 1
(i.e., primitive-action) ε-greedy policies. As in the previous subsection, the behavior policy has
ε = 0.3 and the evaluation policy has ε = 0.05. We compare performing normal PDIS to estimate the
value of the evaluation policy with estimating it using partitioned PDIS via Theorem 4. See Figure 3
for results.
Figure 3: Comparing the MSE of normal PDIS and PDIS
that uses Theorem 4 (labeled Partitioned-PDIS). We gain an order
of magnitude reduction in MSE. Note that this did not require that the primitive
policy use options: we merely used the fact that if
there are subgoals in the domain that the agent is
likely to pass through with a fixed state distribution, we
can leverage that to decompose the value of a long
horizon into a sum over multiple shorter ones. Options are one common way this occurs, but as we
see in this example, it can also occur in other ways.
8 Covariance Testing
The special case of stationary options can be viewed as a form of dropping certain importance
sampling weights from the importance sampling estimator. With stationary options, the weights
before the stationary option are dropped when estimating the rewards thereafter. By considering
the bias incurred when dropping weights, we derive a general rule involving covariances, as follows.
Let W_1 W_2 r be the ordinary importance sampling estimator for reward r, where the product of the
importance sampling weights is partitioned into two products W_1 and W_2 by some general
partitioning scheme such that E(W_1) = 1. Note that this condition is satisfied when W_1, W_2 are
chosen according to commonly used schemes such as fixed timesteps (not necessarily consecutive) or
fixed states, but it can be satisfied by more general schemes as well. Then we can consider dropping
the product of weights W_1 and simply outputting the estimate W_2 r:

E(W_1 W_2 r) = E(W_1) E(W_2 r) + Cov(W_1, W_2 r)   (6)
             = E(W_2 r) + Cov(W_1, W_2 r).   (7)

This means that if Cov(W_1, W_2 r) = 0, then we can drop the weights W_1 with no bias. Otherwise,
the bias incurred is Cov(W_1, W_2 r). We are then free to choose W_1, W_2 to balance the reduction in
variance against the increase in bias.
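In practice the bias term can be estimated from the same samples. A minimal sketch (ours):

import numpy as np

def dropped_weight_bias(W1_samples, W2r_samples):
    """Plug-in estimate of the bias Cov(W1, W2*r) from Equation 7, sketch only.

    W1_samples, W2r_samples: per-trajectory samples of the two factors;
    the identity requires E(W1) = 1 under the chosen partitioning scheme.
    """
    return float(np.cov(W1_samples, W2r_samples)[0, 1])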
8.1 Incremental Importance Sampling (INCRIS)

Using the covariance test (Equation 7), we propose a new importance sampling algorithm called
Incremental Importance Sampling (INCRIS). This is a variant of PDIS where, for a reward r_t, we try
to drop all but the most recent k importance sampling weights, using the covariance test to optimize
k in order to lower the MSE.

Let π_b and π_e be the behavior and evaluation policies, respectively (they may or may not be options-based
policies). Let D = {τ^{(1)}, τ^{(2)}, ..., τ^{(n)}} be our historical data set, generated from π_b, with
n trajectories of length H. Let ρ_t = π_e(a_t|s_t)/π_b(a_t|s_t), and let ρ_t^{(i)} be the same quantity computed from the i-th
trajectory. Suppose we are given estimators for covariance and variance. See Algorithm 1 for details.
Algorithm 1 INCRIS
1: Input: D
2: for t = 1 to H do
3:   for k = 0 to t do
4:     A_k = ∏_{j=1}^{t−k} ρ_j
5:     B_k = ∏_{j=t−k+1}^{t} ρ_j
6:     Estimate Cov(A_k, B_k r_t) and denote it Ĉ_k
7:     Estimate Var(B_k r_t) and denote it V̂_k
8:     Estimate the MSE with MSE_k = Ĉ_k^2 + V̂_k
9:   end for
10:  k' = argmin_k MSE_k
11:  Let r̂_t = (1/n) ∑_{i=1}^n B_{k'}^{(i)} r_t^{(i)}
12: end for
13: Return ∑_{t=1}^H r̂_t
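A direct NumPy transcription of Algorithm 1 follows (ours). We read Var(B_k r_t) as the variance of the sample mean, and we assume strictly positive importance ratios so that prefix products can be divided out; both are implementation choices on our part.

import numpy as np

def incris(rho, r):
    """INCRIS (Algorithm 1), sketch only.

    rho, r: (n, H) arrays of per-step importance ratios rho_t^(i) and
    rewards r_t^(i) for n trajectories of horizon H.
    """
    n, H = rho.shape
    total = 0.0
    for t in range(H):
        prods = np.cumprod(rho[:, :t + 1], axis=1)  # prods[:, j] = rho_1...rho_{j+1}
        full = prods[:, -1]
        best_mse, best_rhat = np.inf, 0.0
        for k in range(t + 2):                      # keep the last k of t+1 weights
            if k == 0:
                B = np.ones(n)                      # drop every weight
            elif k == t + 1:
                B = full                            # keep every weight (unbiased)
            else:
                B = full / prods[:, t - k]          # rho_{t-k+2} ... rho_{t+1}
            A = full / B                            # the dropped prefix A_k
            Br = B * r[:, t]
            C = np.cov(A, Br)[0, 1]                 # bias estimate C_k
            V = np.var(Br, ddof=1) / n              # variance of the sample mean
            mse = C * C + V
            if mse < best_mse:
                best_mse, best_rhat = mse, Br.mean()
        total += best_rhat
    return total

Choosing k separately for every reward lets the estimator behave like sub-episode PDIS when data are scarce and approach unbiased full-trajectory PDIS as n grows.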
8.2 Strong Consistency
In the appendix, we provide a proof that INCRIS is strongly consistent. We now give brief intuition
for the proof. As n goes to infinity, the MSE estimates become more and more accurate: the variance
term vanishes, so the estimated MSE converges to the squared bias. If we do not drop any weights, we
get an unbiased estimate, so the smallest estimated MSE goes to zero, and we become more and
more likely to pick values of k that correspond to unbiased estimates.
9 Experiment 3: Incremental Importance Sampling
To evaluate INCRIS, we constructed a simple MDP that exemplifies the properties of domains for which
we expect INCRIS to be useful. Specifically, we were motivated by the applications of reinforcement
learning methods to type 1 diabetes treatments [1, 17] and digital marketing applications [15]. In
these applications there is a natural place where one might divide data into episodes: for type 1
diabetes treatment, one might treat each day as an independent episode, and for digital marketing,
one might treat each user interaction as an independent episode.
However, each day is not actually independent in diabetes treatment: a person's blood sugar in the
morning depends on their blood sugar at the end of the previous day. Similarly, in digital marketing
applications, whether or not a person clicks on an ad might depend on which ads they were shown
previously (e.g., someone might be less likely to click an ad that they were shown before and did not
click on then). So, although this division into episodes is reasonable, it does not result in episodes
that are completely independent, and so importance sampling will not produce consistent estimates
(or estimates that can be trusted for high-confidence off-policy policy evaluation [18]). To remedy
this, we might treat all of the data from a single individual (many days, and many page visits) as a
single episode, which contains nearly-independent subsequences of decisions.
To model this property, we constructed an MDP with three states, s_1, s_2, and s_3, and two actions, a_1
and a_2. The agent always begins in s_1, where taking action a_1 causes a transition to s_2 with a reward
of +1 and taking action a_2 causes a transition to s_3 with a reward of −1. In s_2, both actions lead to
a terminal absorbing state with reward −2 + ε, and in s_3 both actions lead to a terminal absorbing
state with reward +2. For now, let ε = 0. This simple MDP has a horizon of 2 time steps: after two
actions the agent is always in a terminal absorbing state. To model the aforementioned examples,
we modified this simple MDP so that whenever the agent would transition to the terminal absorbing
state, it instead transitions back to s_1. After visiting s_1 fifty times, the agent finally transitions to a
terminal absorbing state. Furthermore, to model the property that the fifty sub-episodes within the
larger episode are not completely independent, we set ε = 0 initially, and increment ε by 0.01 whenever the
agent enters s_2. This creates a slight dependence across the sub-episodes.
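A sketch of this construction (ours); the exact moment at which ε increments relative to the reward in s_2 is our reading of the text.

import numpy as np

def run_episode(policy, rng, n_sub=50):
    """Simulate the chained toy MDP, sketch only.

    policy(s) returns the probability of choosing a1 in state s; returns the
    (state, action, reward) triples of one 100-step episode (50 sub-episodes).
    """
    eps, traj = 0.0, []
    for _ in range(n_sub):
        a = "a1" if rng.random() < policy("s1") else "a2"
        traj.append(("s1", a, 1.0 if a == "a1" else -1.0))
        s = "s2" if a == "a1" else "s3"
        if s == "s2":
            eps += 0.01                        # dependence across sub-episodes
        a2 = "a1" if rng.random() < policy(s) else "a2"
        traj.append((s, a2, (-2.0 + eps) if s == "s2" else 2.0))
    return traj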
For this illustrative domain, we would like an importance sampling estimator that assumes sub-episodes
are independent when there is little data, in order to reduce variance. However, once there is
enough data for the variances of the estimates to be sufficiently small relative to the bias introduced by
assuming that sub-episodes are independent, the importance sampling estimator should automatically
begin considering longer sequences of actions, as INCRIS does. We compared INCRIS to ordinary
importance sampling (IS), per-decision importance sampling (PDIS), weighted importance sampling
(WIS), and consistent weighted per-decision importance sampling (CWPDIS). The behavior policy
selects actions randomly, while the evaluation policy selects action a_1 with higher probability than
a_2. In Figure 4 we report the mean squared errors of the different estimators using different amounts
of data.
[Figure 4 appears here: MSE versus amount of historical data n, on log-log axes, for IS, PDIS, WIS, CWPDIS, and INCRIS; see the caption below.]
Figure 4: Performance of different estimators on the simple
MDP that models properties of the diabetes treatment and
digital marketing applications. The reported mean squared
errors are the sample mean squared errors from 128 trials.
Notice that INCRIS achieves an order of magnitude lower
mean squared error than all of the other estimators, and for
some n it achieves two orders of magnitude improvement
over the unweighted importance sampling estimators.
10 Conclusion
We have shown that using options-based behavior and evaluation policies allows for lower mean
squared error when using importance sampling, due to their structure. Furthermore, special cases may
naturally arise when using options, such as options that terminate in a fixed state distribution, and
these lead to even greater reduction of the mean squared error.

We examined options as a first step, but in the future these results may be extended to fully hierarchical
policies (like the MAXQ framework). We also generalized the naturally occurring special cases with
covariance testing, which leads to dropping weights in order to improve importance sampling
predictions. We presented an instance of covariance testing in the algorithm INCRIS, which can greatly
improve estimation accuracy for a general class of domains, and we hope to derive more powerful
estimators based on covariance testing that apply to even more domains in the future.
Acknowledgements
The research reported here was supported in part by an ONR Young Investigator award, an NSF
CAREER award, and by the Institute of Education Sciences, U.S. Department of Education. The
opinions expressed are those of the authors and do not represent views of NSF, IES or the U.S. Dept.
of Education.
References
[1] M. Bastani. Model-free intelligent diabetes management using machine learning. Master's thesis, Department of Computing Science, University of Alberta, 2014.
[2] Emma Brunskill and Lihong Li. PAC-inspired option discovery in lifelong reinforcement learning. In ICML, pages 316-324, 2014.
[3] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. (JAIR), 13:227-303, 2000.
[4] M. Dudík, J. Langford, and L. Li. Doubly robust policy evaluation and learning. In Proceedings of the Twenty-Eighth International Conference on Machine Learning, pages 1097-1104, 2011.
[5] Assaf Hallak, François Schnitzler, Timothy Arthur Mann, and Shie Mannor. Off-policy model-based learning under unknown factored dynamics. In ICML, pages 711-719, 2015.
[6] Assaf Hallak, Aviv Tamar, Rémi Munos, and Shie Mannor. Generalized emphatic temporal difference learning: Bias-variance analysis. arXiv preprint arXiv:1509.05172, 2015.
[7] Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 652-661, 2016.
[8] Lihong Li, Remi Munos, and Csaba Szepesvari. Toward minimax off-policy value estimation. In Guy Lebanon and S. V. N. Vishwanathan, editors, Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 608-616, San Diego, California, USA, 09-12 May 2015. PMLR. URL http://proceedings.mlr.press/v38/li15b.html.
[9] Daniel J. Mankowitz, Timothy A. Mann, and Shie Mannor. Time regularized interrupting options. In International Conference on Machine Learning, 2014.
[10] Timothy A. Mann and Shie Mannor. The advantage of planning with options. RLDM 2013, page 9, 2013.
[11] Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pages 1046-1054, 2016.
[12] Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80, 2000.
[13] Doina Precup, Richard S. Sutton, and Sanjoy Dasgupta. Off-policy temporal-difference learning with function approximation. In ICML, pages 417-424, 2001.
[14] Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
[15] G. Theocharous, P. S. Thomas, and M. Ghavamzadeh. Personalized ad recommendation systems for life-time value optimization with guarantees. In Proceedings of the International Joint Conference on Artificial Intelligence, 2015.
[16] P. S. Thomas and E. Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, 2016.
[17] P. S. Thomas and E. Brunskill. Importance sampling with unequal support. AAAI, 2017.
[18] P. S. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence off-policy evaluation. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence, 2015.
[19] Philip S. Thomas, Scott Niekum, Georgios Theocharous, and George Konidaris. Policy evaluation using the λ-return. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 334-342. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5807-policy-evaluation-using-the-return.pdf.
10
| 6843 |@word trial:1 faculty:1 termination:2 simulation:2 crucially:1 tried:1 covariance:11 decomposition:3 pick:3 reduction:6 configuration:1 contains:1 uma:1 exclusively:1 hereafter:1 daniel:2 series:1 o2:2 current:2 comparing:2 realize:1 partition:4 motor:1 drop:6 stationary:10 greedy:5 leaf:1 half:1 intelligence:4 beginning:1 provides:2 mannor:4 contribute:1 location:18 simpler:1 constructed:2 direct:1 prove:2 doubly:3 consists:2 combine:1 assaf:2 emma:2 introduce:2 indeed:1 expected:5 behavior:18 planning:2 examine:1 nor:1 terminal:5 inspired:1 alberta:1 automatically:2 little:1 considering:2 becomes:1 provided:1 estimating:4 bounded:1 underlying:12 begin:2 lowest:1 rmax:1 hallak:2 csaba:1 guarantee:1 temporal:6 mitigate:1 partitioning:1 before:5 persists:1 dropped:1 treat:3 switching:1 taxi:12 sutton:2 ak:2 theocharous:3 jiang:2 approximately:1 might:7 au:2 eb:1 examined:1 collect:1 someone:1 testing:6 significantly:4 confidence:2 get:4 cannot:1 bellemare:1 optimize:1 map:1 missing:2 yt:1 eighteenth:1 primitive:19 straightforward:1 attention:1 independently:2 regardless:2 starting:1 go:3 simplicity:1 immediately:2 factored:1 rule:2 estimator:22 diego:1 suppose:1 user:1 us:1 designing:1 curran:1 diabetes:5 pa:1 associate:1 particularly:1 cut:1 labeled:1 observed:2 preprint:1 enters:1 worst:3 episode:11 trade:1 observes:1 mentioned:1 intuition:1 environment:1 complexity:1 sugar:2 reward:19 dynamic:2 ghavamzadeh:2 depend:1 solving:1 singh:1 creates:1 division:1 completely:2 easily:1 joint:2 effective:1 monte:1 artificial:4 niekum:1 outside:1 whose:2 stanford:3 valued:1 larger:1 otherwise:1 cov:5 statistic:1 noisy:1 online:1 advantage:3 sequence:3 propose:3 subtracting:1 interaction:1 product:4 reset:2 combining:3 argmink:1 achieve:1 description:2 undiscounted:1 r1:3 produce:1 incremental:5 executing:4 object:1 derive:4 depending:2 qt:1 strong:1 c:3 involves:1 come:2 implies:1 ois:1 differ:5 direction:1 safe:1 closely:1 correct:1 shrunk:1 opinion:1 mann:3 education:4 require:3 decompose:4 dropoff:2 sufficiently:1 considered:1 normal:2 lawrence:1 predict:1 achieves:2 consecutive:1 a2:4 smallest:1 pjt:1 estimation:4 integrates:1 leftover:1 create:1 successfully:1 weighted:4 trusted:1 hope:1 sensor:1 always:7 modified:1 rather:1 pn:1 publication:1 corollary:5 exemplifies:1 focus:1 vk:2 improvement:1 prevalent:1 greatly:3 contrast:1 abstraction:4 stopping:1 accumulated:1 inaccurate:1 unlikely:1 initially:1 going:1 selects:2 issue:1 aforementioned:1 html:1 denoted:3 constrained:1 special:11 v38:1 once:2 having:1 beach:1 sampling:51 broad:1 icml:3 nearly:1 future:2 report:1 intelligent:2 serious:1 cardinal:1 richard:2 randomly:5 intell:1 individual:1 attempt:1 mankowitz:1 investigate:1 evaluation:38 grasp:1 navigation:1 sh:1 accurate:2 partial:6 arthur:1 respective:1 shorter:1 old:1 divide:1 desired:2 re:1 instance:1 column:1 soft:1 markovian:2 ordinary:2 subset:1 hundred:1 successful:1 reported:2 varies:1 st:11 density:1 person:2 amherst:2 international:5 destination:3 off:19 lee:1 picking:3 modelbased:1 precup:3 w1:13 squared:9 aaai:1 thesis:1 satisfied:2 management:1 choose:1 guy:1 dialogue:2 return:6 li:5 potential:2 student:1 inc:1 doina:3 depends:1 passenger:15 ad:4 later:1 try:2 picked:4 view:2 analyze:1 start:3 pthomas:1 option:95 minimize:1 square:1 ni:1 accuracy:4 variance:19 correspond:1 generalize:2 rbt:2 produced:1 carlo:1 trajectory:27 cc:1 executes:1 history:4 ah:1 deploying:1 reach:1 halve:1 whenever:2 harutyunyan:1 definition:1 konidaris:1 naturally:2 proof:7 mi:2 gain:1 
treatment:4 massachusetts:1 popular:4 conversation:1 subsection:1 ou:2 actually:2 back:1 appears:1 ok:1 higher:1 jair:1 day:4 tom:1 ebrun:1 response:1 shrink:1 strongly:2 though:1 furthermore:4 just:2 marketing:4 until:2 langford:1 hand:1 eqn:1 su:4 morning:1 mdp:8 artif:1 aviv:1 usa:2 dietterich:1 requiring:2 unbiased:4 remedy:1 dud:2 eligibility:1 illustrative:1 generalized:2 trying:1 pdf:1 demonstrate:3 began:1 common:1 absorbing:5 mlr:1 empirically:3 exponentially:6 volume:1 subgoals:1 slight:1 mellon:1 significant:1 rd:1 grid:2 consistency:1 similarly:3 sugiyama:1 lihong:3 access:1 robot:1 encountering:1 longer:1 recent:3 showed:1 manipulation:1 certain:1 onr:1 arbitrarily:1 vt:2 life:1 seen:1 greater:2 care:1 additional:1 george:1 determine:1 converge:1 semi:1 multiple:1 full:1 faster:2 long:12 visit:1 award:2 a1:8 y:1 prediction:2 variant:2 involving:1 essentially:2 cmu:1 arxiv:2 represent:1 robotics:1 addition:1 background:1 want:1 grow:1 ot:12 biased:1 w2:12 fifty:2 shie:4 leveraging:1 leverage:2 enough:1 timesteps:1 click:3 observability:5 reduce:7 idea:2 lesser:1 tamar:1 whether:3 motivated:1 url:2 effort:2 suffer:2 cause:5 action:26 ignored:1 useful:2 se:2 involve:3 amount:5 ten:1 ph:1 induces:1 schnitzler:1 reduced:4 http:2 nsf:2 notice:2 s3:3 per:3 carnegie:1 discrete:2 dropping:6 dasgupta:1 express:1 thereafter:1 four:2 achieving:1 blood:2 bastani:1 v1:1 merely:1 halfway:1 sum:6 uncertainty:1 powerful:4 master:1 place:1 reasonable:1 decide:1 fran:1 interrupting:1 decision:11 appendix:4 bound:1 nan:1 ope:8 occur:2 infinity:1 vishwanathan:1 personalized:1 argument:1 performing:1 department:3 designated:1 according:2 terminates:2 across:1 wi:3 partitioned:3 making:2 s1:10 previously:1 eventually:1 know:3 end:3 available:1 apply:4 hierarchical:2 v2:1 qto:1 occurrence:3 pmlr:1 batch:1 alternative:1 thomas:8 denotes:2 running:3 assumes:1 move:1 question:1 costly:1 rt:8 dependence:1 visiting:1 philip:2 evaluate:5 collected:2 tutoring:1 toward:1 assuming:1 length:12 o1:4 ratio:2 balance:1 equivalently:2 unfortunately:2 reweight:1 trace:1 policy:127 unknown:3 perform:2 twenty:2 upper:1 observation:2 markov:1 finite:1 pickup:4 extended:3 ever:1 gridworld:1 interacting:1 ninth:1 bk:5 introduced:1 pair:1 required:3 california:1 unequal:1 maxq:1 nip:2 address:1 able:1 below:2 scott:1 eighth:1 including:3 max:1 natural:3 rely:1 regularized:1 arm:1 minimax:1 scheme:3 improve:6 brief:1 mdps:1 risky:1 temporally:2 axis:1 health:1 prior:1 acknowledgement:1 discovery:1 relative:1 georgios:1 expect:1 interesting:1 limitation:1 var:1 emphatic:1 digital:4 incurred:2 agent:9 consistent:4 s0:1 editor:2 row:1 changed:7 supported:1 free:2 dis:5 drastically:1 bias:10 offline:1 allow:1 institute:1 lifelong:1 taking:4 munos:3 benefit:1 default:1 evaluating:2 world:2 transition:8 unweighted:1 concretely:1 made:3 commonly:1 reinforcement:7 author:1 san:1 historical:7 lebanon:1 approximate:2 observable:3 satinder:1 robotic:1 instantiation:1 pittsburgh:1 subsequence:1 continuous:1 sk:1 terminate:2 robust:7 ca:2 career:1 szepesvari:1 contributes:1 mse:11 necessarily:1 constructing:1 domain:17 marc:1 garnett:1 did:2 anna:1 terminated:1 s2:6 rh:1 arise:3 noise:1 allowed:1 deployed:2 shrinking:1 brunskill:5 sub:8 exponential:3 lie:1 young:1 dozen:2 theorem:16 down:1 stepleton:1 specific:2 navigate:3 jt:2 showing:1 pac:1 r2:1 cortes:1 sequential:4 effectively:4 importance:52 adding:1 mirror:1 magnitude:4 illustrates:1 occurring:1 horizon:21 timothy:3 simply:3 likely:4 remi:1 expressed:3 partially:3 
recommendation:1 corresponds:1 ma:1 viewed:2 goal:1 invalid:1 internation:1 content:1 change:1 specifically:1 reducing:1 uniformly:2 wt:3 called:8 sanjoy:1 stake:1 experimental:1 formally:1 support:1 guo:1 arises:1 investigator:1 dept:1 |
How regularization affects the critical points in linear networks
Amirhossein Taghvaei*
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, IL, 61801
[email protected]
Jin W. Kim
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, IL, 61801
[email protected]
Prashant G. Mehta
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, IL, 61801
[email protected]
Abstract
This paper is concerned with the problem of representing and learning a linear
transformation using a linear neural network. In recent years, there is a growing
interest in the study of such networks, in part due to the successes of deep learning.
The main question of this body of research (and also of our paper) is related to the
existence and optimality properties of the critical points of the mean-squared loss
function. An additional primary concern of our paper pertains to the robustness of
these critical points in the face of (a small amount of) regularization. An optimal
control model is introduced for this purpose and a learning algorithm (backprop
with weight decay) derived for the same using Hamilton's formulation of
optimal control. The formulation is used to provide a complete characterization of
the critical points in terms of the solutions of a nonlinear matrix-valued equation,
referred to as the characteristic equation. Analytical and numerical tools from
bifurcation theory are used to compute the critical points via the solutions of the
characteristic equation.
1 Introduction
This paper is concerned with the problem of representing and learning a linear transformation with a
linear neural network. Although a classical problem (Baldi and Hornik [1989, 1995]), there has been
a renewed interest in such networks (Saxe et al. [2013], Kawaguchi [2016], Hardt and Ma [2016],
Gunasekar et al. [2017]) because of the successes of deep learning. The motivation for studying linear
networks is to gain insight into the optimization problem for the more general nonlinear networks. A
* Financial support from the NSF CMMI grant 1462773 is gratefully acknowledged.
focus of the recent research on these (and also nonlinear) networks has been on the analysis of the
critical points of the non-convex loss function (Dauphin et al. [2014], Choromanska et al. [2015a,b],
Soudry and Carmon [2016], Bhojanapalli et al. [2016]). This is also the focus of our paper.
Problem: The input-output model is assumed to be of the following linear form:

    Z = R X_0 + ξ    (1)

where X_0 ∈ R^{d×1} is the input, Z ∈ R^{d×1} is the output, and ξ ∈ R^{d×1} is the noise. The input X_0 is modeled as a random variable whose distribution is denoted as p_0. Its second moment is denoted as Σ_0 := E[X_0 X_0^⊤] and assumed to be finite. The noise ξ is assumed to be independent of X_0, with zero mean and finite variance. The linear transformation R ∈ M_d(R) is assumed to satisfy a property (P1) introduced in Sec. 3 (M_d(R) denotes the set of d × d matrices). The problem is to learn the weights of a linear neural network from i.i.d. input-output samples {(X_0^k, Z^k)}_{k=1}^K.
Solution architecture: The solution architecture is a continuous-time linear feedforward neural network model:

    dX_t/dt = A_t X_t    (2)

where A_t ∈ M_d(R) are the network weights indexed by continuous time (a surrogate for layer) t ∈ [0, T], and X_0 is the initial condition at time t = 0 (same as the input data). The parameter T denotes the network depth. The optimization problem is to choose the weights A_t over the time horizon [0, T] to minimize the mean-squared loss function:

    E[|X_T − Z|^2]    (3)

This problem is referred to as the [λ = 0] problem.
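For intuition, the network (2) can be simulated with an explicit Euler discretization, which makes the residual-network analogy concrete. A minimal sketch in Python (the depth, step count, and random weights below are illustrative assumptions, not values from the paper):

    import numpy as np

    def forward(A, x0, T=1.0):
        """Euler discretization of dX/dt = A_t X: X_{i+1} = (I + dt*A_i) X_i."""
        L, d, _ = A.shape        # A holds the weights at L time steps
        dt = T / L
        x = x0
        for i in range(L):
            x = x + dt * (A[i] @ x)   # residual-style update
        return x

    d, L = 4, 50                      # illustrative choices
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.1, size=(L, d, d))
    x0 = rng.normal(size=(d, 1))
    xT = forward(A, x0)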
Backprop is a stochastic gradient descent algorithm for learning the weights At . In general, one
obtains (asymptotic) convergence of the learning algorithm to a (local) minimum of the optimization
problem Lee et al. [2016], Ge et al. [2015]. This has spurred investigation of the critical points of the
loss function (3) and the optimality properties (local vs. global minima, saddle points) of these points.
For linear multilayer (discrete) neural networks (MNN), strong conclusions have been obtained under
rather mild conditions: every local minimum is a global minimum and every critical point that is
not a local minimum is a saddle point Kawaguchi [2016], Baldi and Hornik [1989]. For the discrete
counterpart of the [λ = 0] problem (referred to as the linear residual network in Hardt and Ma [2016]), an even stronger conclusion is possible: all critical points of the [λ = 0] problem are global minima.
In experiments, some of these properties are also empirically observed in deep nonlinear networks;
cf., Choromanska et al. [2015b], Dauphin et al. [2014], Saxe et al. [2013].
In this paper, we consider the following regularized form of the optimization problem:

    Minimize:   J[A] = E[ ∫_0^T (λ/2) tr(A_t^⊤ A_t) dt + (1/2)|X_T − Z|^2 ]
    Subject to: dX_t/dt = A_t X_t,  X_0 ∼ p_0    (4)

where λ ∈ R_+ := {x ∈ R : x ≥ 0} is a regularization parameter. In literature, this form of regularization is referred to as weight decay [Goodfellow et al., 2016, Sec. 7.1.1]. Eq. (4) is an example of an optimal control problem and is referred to as such. The limit λ ↓ 0 is referred to as the [λ = 0^+] problem. The symbol tr(·) and superscript ⊤ are used to denote matrix trace and matrix transpose, respectively.
The regularized problem is important because of the following reasons:
(i) The learning algorithms are believed to converge to the critical points of the regularized [λ = 0^+] problem, a phenomenon known as implicit regularization Neyshabur et al. [2014], Zhang et al. [2016], Gunasekar et al. [2017].

(ii) It is shown in the paper that the stochastic gradient descent (for the functional J) yields the following learning algorithm for the weights A_t:

    A_t^{(k+1)} = A_t^{(k)} + η_k (−λ A_t^{(k)} + backprop update)    (5)

for k = 1, 2, . . ., where η_k is the learning rate parameter. Thus, the parameter λ models dissipation (or weight decay) in backprop. In an implementation of backprop, one would expect to obtain critical points of the [λ = 0^+] problem.
The outline of the remainder of this paper is as follows: Hamilton's formulation is introduced for the optimal control problem (4) in Sec. 2; cf., LeCun et al. [1988], Farotimi et al. [1991] for related constructions. The Hamilton's equations are used to obtain a formula for the gradient of J, and subsequently derive the stochastic gradient descent learning algorithm of the form (5). The equations for the critical points of J are obtained by applying the Maximum Principle of optimal control (Prop. 1). Remarkably, the Hamilton's equations for the critical points can be solved in closed form to obtain a characterization of the critical points in terms of the solutions of a nonlinear matrix-valued equation, referred to as the characteristic equation (Prop. 2). For a certain special case, where the matrix R is normal, analytical results are obtained based on the use of the implicit function theorem (Thm. 2). Numerical continuation is employed to compute the solutions for this and the more general non-normal cases (Examples 1 and 2).
2 Hamilton's formulation and the learning algorithm
Definition 1. The control Hamiltonian is the function

    H(x, y, B) = y^⊤ B x − (λ/2) tr(B^⊤ B)    (6)

where x ∈ R^d is the state, y ∈ R^d is the co-state, and B ∈ M_d(R) is the weight matrix. The partial derivatives are denoted as ∂H/∂x (x, y, B) := B^⊤ y, ∂H/∂y (x, y, B) := B x, and ∂H/∂B (x, y, B) := y x^⊤ − λ B.
Pontryagin's Maximum Principle (MP) is used to obtain the Hamilton's equations for the solution of the optimal control problem (4). The MP represents a necessary condition satisfied by any minimizer. Conversely, a solution of the Hamilton's equations is a critical point of the functional J. The proof of the following proposition appears in the supplementary material.

Proposition 1. Consider the terminal cost optimal control problem (4) with λ ≥ 0. Suppose A_t is the minimizer and X_t is the corresponding trajectory. Then there exists a random process Y : [0, T] → R^d such that

    dX_t/dt = +∂H/∂y (X_t, Y_t, A_t) = +A_t X_t,    X_0 ∼ p_0    (7)
    dY_t/dt = −∂H/∂x (X_t, Y_t, A_t) = −A_t^⊤ Y_t,   Y_T = Z − X_T    (8)

and A_t maximizes the expected value of the Hamiltonian

    A_t = arg max_{B ∈ M_d(R)} E[H(X_t, Y_t, B)] = (1/λ) E[Y_t X_t^⊤]  (the last equality for λ > 0)    (9)

Conversely, if there exist A_t and the pair (X_t, Y_t) such that equations (7)-(8)-(9) are satisfied, then A_t is a critical point of the optimization problem (4).
Remark 1. The Maximum Principle can also be used to derive analogous (difference) equations in
discrete-time as well as nonlinear settings. It is equivalent to the method of Lagrange multipliers that
is used to derive the backprop algorithm in MNN, e.g., LeCun et al. [1988]. The continuous-time limit
is considered here because the computations are simpler and the results are more insightful. Similar
considerations have also motivated the study of continuous-time limit of other types of optimization
algorithms, e.g., Su et al. [2014], Wibisono et al. [2016].
The Hamiltonian is also used to express the first order variation in the functional J. For this purpose, define the Hilbert space of matrix-valued functions L^2([0, T]; M_d(R)) := {A : [0, T] → M_d(R) | ∫_0^T tr(A_t^⊤ A_t) dt < ∞} with the inner product ⟨A, V⟩_{L^2} := ∫_0^T tr(A_t^⊤ V_t) dt. For any A ∈ L^2, the gradient of the functional J evaluated at A is denoted as ∇J[A] ∈ L^2. It is defined using the directional derivative formula:

    ⟨∇J[A], V⟩_{L^2} := lim_{ε→0} (J(A + εV) − J(A)) / ε

where V ∈ L^2 prescribes the direction (variation) along which the derivative is being computed. The explicit formula for ∇J is given by

    ∇J[A] := −E[∂H/∂B (X_t, Y_t, A_t)] = λ A_t − E[Y_t X_t^⊤]    (10)
where X_t and Y_t are obtained by solving the Hamilton's equations (7)-(8) with the prescribed (not necessarily optimal) weight matrix A ∈ L^2. The significance of the formula is that the steepest descent in the objective function J is obtained by moving in the direction of the steepest (for each fixed t ∈ [0, T]) ascent in the Hamiltonian H. Consequently, a stochastic gradient descent algorithm to learn the weights is as follows:

    A_t^{(k+1)} = A_t^{(k)} − η_k (λ A_t^{(k)} − Y_t^{(k)} X_t^{(k)⊤}),    (11)

where η_k is the step-size at iteration k, and X_t^{(k)} and Y_t^{(k)} are obtained by solving the Hamilton's equations (7)-(8):

    (Forward propagation)   dX_t^{(k)}/dt = +A_t^{(k)} X_t^{(k)},  with init. cond. X_0^{(k)}    (12)
    (Backward propagation)  dY_t^{(k)}/dt = −A_t^{(k)⊤} Y_t^{(k)},  Y_T^{(k)} = Z^{(k)} − X_T^{(k)}    (13)

where the terminal condition Z^{(k)} − X_T^{(k)} is the error,
based on the sample input-output pair (X_0^{(k)}, Z^{(k)}). Note the forward-backward structure of the algorithm: In the forward pass, the network output X_T^{(k)} is obtained given the input X_0^{(k)}; in the backward pass, the error between the network output X_T^{(k)} and the true output Z^{(k)} is computed and propagated backwards. The regularization parameter is also interpreted as the dissipation or the weight decay parameter. By setting λ = 0, the standard backprop algorithm is obtained. A convergence result for the learning algorithm for the [λ = 0] case appears as part of the supplementary material.
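A minimal sketch of the discretized learning rule (11)-(13), assuming an explicit Euler scheme on a grid of L layers; the target matrix R, the depth, the step sizes, and the iteration count below are illustrative assumptions, not the paper's experimental setup:

    import numpy as np

    rng = np.random.default_rng(0)
    d, L, T, lam, eta = 3, 20, 1.0, 0.01, 0.02   # illustrative hyper-parameters
    dt = T / L
    R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 2.]])  # toy target satisfying (P1)
    A = np.zeros((L, d, d))                      # weights A_t on a grid of L time steps

    for k in range(5000):
        x0 = rng.normal(size=(d, 1))
        z = R @ x0                               # noise-free sample from model (1)
        # forward propagation (12): X_{i+1} = (I + dt A_i) X_i
        X = [x0]
        for i in range(L):
            X.append(X[i] + dt * (A[i] @ X[i]))
        # backward propagation (13), integrated from t = T down to t = 0
        Y = [None] * (L + 1)
        Y[L] = z - X[L]                          # terminal error
        for i in reversed(range(L)):
            Y[i] = Y[i + 1] + dt * (A[i].T @ Y[i + 1])
        # gradient step (11): A_i <- A_i - eta (lam A_i - Y_i X_i^T)
        for i in range(L):
            A[i] -= eta * (lam * A[i] - Y[i] @ X[i].T)

    print(np.linalg.norm(X[L] - z))              # terminal error on the last sample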
In the remainder of this paper, the focus is on the analysis of the critical points.
3 Critical points
For continuous-time networks, the critical points of the [λ = 0] problem are all global minimizers (an analogous result for residual MNN appears in [Hardt and Ma, 2016, Thm. 2.3]).
Theorem 1. Consider the [λ = 0] optimization problem (4) with non-singular Σ_0. For this problem (provided a minimizer exists) every critical point is a global minimizer. That is,

    ∇J[A] = 0  ⟺  J(A) = J* := min_A J[A]

Moreover, for any given (not necessarily optimal) A ∈ L^2,

    ‖∇J[A]‖^2_{L^2} ≥ T e^{−2 ∫_0^T tr(A_t^⊤ A_t) dt} λ_min(Σ_0) (J(A) − J*)    (14)

where λ_min(Σ_0) is the smallest eigenvalue of Σ_0.
Proof. (Sketch) For the linear system (2), the fundamental solution matrix is denoted as Φ_{t;t_0}. The solutions of the Hamilton's equations (7)-(8) are given by

    X_t = Φ_{t;0} X_0,    Y_t = Φ_{T;t}^⊤ (Z − X_T)

Using the formula (10) upon taking an expectation,

    ∇J[A] = −Φ_{T;t}^⊤ (R − Φ_{T;0}) Σ_0 Φ_{t;0}^⊤

which (because Φ is invertible) proves that:

    ∇J[A] = 0  ⟺  Φ_{T;0} = R  ⟺  J(A) = J* := min_A J[A]

The derivation of the bound (14) is equally straightforward and appears as part of the supplementary material.
Although the result is attractive, the conclusion is somewhat misleading because (as we will demonstrate with examples) even a small amount of regularization can lead to local (but not global) minima as well as saddle point solutions.

Assumption: The following assumption is made throughout the remainder of this paper:

(i) Property P1: The matrix R has no eigenvalues on R_− := {x ∈ R : x ≤ 0}. The matrix R is non-derogatory. That is, no eigenvalue of R appears in more than one Jordan block.

For the scalar (d = 1) case, this property means R is strictly positive. For the scalar case, the fundamental solution is given by the closed-form formula Φ_{T,0} = e^{∫_0^T A_t dt}. Thus, the positivity of R is seen to be necessary to obtain a meaningful solution.

For the vector case, this property represents a sufficient condition such that log(R) can be defined as a real-valued matrix. That is, under property (P1), there exists a (not necessarily unique²) matrix log(R) ∈ M_d(R) whose matrix exponential e^{log(R)} = R; cf., Culver [1966], Higham [2014].

The logarithm is trivially a minimum for the [λ = 0] problem. Indeed, A_t ≡ (1/T) log(R) gives X_t = e^{(t/T) log(R)} X_0 and thus X_T = e^{log(R)} X_0 = R X_0. This shows A_t can be made arbitrarily small by choosing a large enough depth T of the network. An analogous result for the linear residual MNN appears in [Hardt and Ma, 2016, Thm. 2.1]. The question then is whether the constant solution A_t ≡ (1/T) log(R) is also obtained as a critical point for the [λ = 0^+] problem?
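This constant construction is easy to check numerically; scipy's logm returns the principal logarithm, which is well defined under (P1). A minimal sketch (the example matrix is an illustrative choice):

    import numpy as np
    from scipy.linalg import expm, logm

    T = 1.0
    R = np.array([[0., -1.], [1., 0.]])   # rotation by pi/2; no eigenvalues on R_-
    C = logm(R).real / T                  # constant weights A_t = C
    Phi = expm(T * C)                     # fundamental matrix for constant A_t
    assert np.allclose(Phi, R)            # X_T = e^{log R} X_0 = R X_0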
The following proposition provides a complete characterization of the critical points (for the general λ ∈ R_+ problem) in terms of the solutions of a matrix-valued characteristic equation:

Proposition 2. The general solution of the Hamilton's equations (7)-(9) is given by

    X_t = e^{2tΩ} e^{tC^⊤} X_0    (15)
    Y_t = e^{2tΩ} e^{(T−t)C} e^{−2TΩ} (Z − X_T)    (16)
    A_t = e^{2tΩ} C e^{−2tΩ}    (17)

where C ∈ M_d(R) is an arbitrary solution of the characteristic equation

    λ C = F^⊤ (R − F) Σ_0    (18)

where F := e^{2TΩ} e^{TC^⊤} and the matrix Ω := (1/2)(C − C^⊤) is the skew-symmetric component of C. The associated cost is given by

    J[A] = (λT/2) tr(C^⊤ C) + (1/2) tr((F − R)^⊤ (F − R) Σ_0) + (1/2) E[|ξ|^2]

And the following holds: A_t ≡ C ⟺ C is normal, and (for Σ_0 = I) C is normal ⟹ R is normal.

² Under Property (P1), log(R) is uniquely defined if and only if all the eigenvalues of R are positive. When not unique, there are countably many matrix logarithms, all denoted as log(R). The principal logarithm of R is the unique such matrix whose eigenvalues lie in the strip {z ∈ C : −π < Im(z) < π}.
Proof. (Sketch) Differentiating both sides of (9) with respect to t and using the Hamilton's equations (7)-(8), one obtains

    dA_t/dt = −A_t^⊤ A_t + A_t A_t^⊤

whose general solution is given by (17). The remainder of the analysis is straightforward and appears as part of the supplementary material.
Remark 2. Prop. 2 shows that the answer to the question posed above concerning the constant solution A_t ≡ (1/T) log(R) is false in general for the [λ = 0^+] problem: For λ > 0 and Σ_0 = I, a constant solution is a critical point only if R is a normal matrix. For the generic case of non-normal R, any critical point is necessarily non-constant for any positive choice of the parameter λ. Some of these non-constant critical points are described as part of the Example 2.

Remark 3. The linear structure of the input-output model (1) is not necessary to derive the results in Prop. 2. For correlated input-output random variables (X, Z), the general form of the characteristic equation is as follows:

    λ C = F^⊤ (E[Z X_0^⊤] − F Σ_0)

where (as before) Σ_0 = E[X_0 X_0^⊤], and F := e^{2TΩ} e^{TC^⊤} where Ω := (1/2)(C − C^⊤).
Prop. 2 is useful because it helps reduce the infinite-dimensional problem to a finite-dimensional characteristic equation (18). The solutions C of the characteristic equation fully parametrize the solutions of the Hamilton's equations (7)-(9), which in turn represent the critical points of the optimal control problem (4).
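Numerically, Prop. 2 therefore reduces checking a candidate critical point to evaluating the residual of (18). A sketch (with F := e^{2TΩ} e^{TC^⊤} as in the proposition; the rotation example is illustrative):

    import numpy as np
    from scipy.linalg import expm, logm

    def char_residual(C, R, Sigma0, lam, T=1.0):
        """Residual of the characteristic equation (18): lam*C - F^T (R - F) Sigma0."""
        Omega = 0.5 * (C - C.T)                  # skew-symmetric part of C
        F = expm(2 * T * Omega) @ expm(T * C.T)
        return lam * C - F.T @ (R - F) @ Sigma0

    R = np.array([[0., -1.], [1., 0.]])
    C = logm(R).real                             # principal logarithm, T = 1
    print(np.abs(char_residual(C, R, np.eye(2), lam=0.0)).max())  # ~0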
The matrix-valued nonlinear characteristic equation (18) is still formidable. To gain analytical and numerical insight into the matrix case, the following strategy is employed:

(i) A solution C is obtained by setting λ = 0 in the characteristic equation. The corresponding equation is

    e^{T(C − C^⊤)} e^{TC^⊤} = R

This solution is denoted as C(0).

(ii) The implicit function theorem is used to establish (local) existence of a solution branch C(λ) in a neighborhood of the λ = 0 solution.

(iii) Numerical continuation is used to compute the solution C(λ) as a function of the parameter λ.
The following theorem provides a characterization of normal solutions C for the case where R is assumed to be a normal matrix and Σ_0 = I. Its proof appears as part of the supplementary material.

Theorem 2. Consider the characteristic equation (18) where R is assumed to be a normal matrix that satisfies the Property (P1) and Σ_0 = I.
Figure 1: (a) Critical points in Example 1 (the (2, 1) entry of the solution matrix C(λ; n) is depicted for n = 0, ±1, ±2); (b) The cost J[A] for these solutions.
(i) For λ = 0 the normal solutions of (18) are given by (1/T) log(R).

(ii) For each such solution, there exists a neighborhood N ⊂ R_+ of λ = 0 such that the solution of the characteristic equation (18) is well-defined as a continuous map from λ ∈ N to C(λ) ∈ M_d(R) with C(0) = (1/T) log(R). This solution is given by the asymptotic formula

    C(λ) = (1/T) log(R) − (λ/T^2) (RR^⊤)^{−1} log(R) + O(λ^2)
Remark 4. For the scalar case log(·) is a single-valued function. Therefore, A_t ≡ C = (1/T) log(R) is the unique critical point (minimizer) for the [λ = 0^+] problem. While the [λ = 0^+] problem admits a unique minimizer, the [λ = 0] problem does not. In fact, any A_t of the form A_t = (1/T) log(R) + Ã_t, where ∫_0^T Ã_t dt = 0, is also a minimizer of the [λ = 0] problem. So, while there are infinitely many minimizers of the [λ = 0] problem, only one of these survives with even a small amount of regularization. A global characterization of critical points as a function of parameters (λ, R, Σ_0, T) ∈ R_+ × R_+ × R_+ × R_+ is possible and appears as part of the supplementary material.
"
#
0 ?1
Example 1 (Normal matrix case). Consider the characteristic equation (18) with R =
1 0
(rotation in the plane by ?/2), ?0 = I and T = 1. For ? = 0, the normal solutions of the
characteristic equation are given by the multi-valued matrix logarithm function:
"
#
0 ?1
log(R) = (?/2 + 2n?)
=: C(0; n), n = 0, ?1, ?2, . . .
1 0
It is easy to verify that eC(0;n) = R. C(0; 0) is referred to as the principal logarithm.
The software package PyDSTool Clewley et al. [2007] is used to numerically continue the solution C(λ; n) as a function of the parameter λ. Fig. 1(a) depicts the solution branches in terms of the (2, 1) entry of the matrix C(λ; n) for n = 0, ±1, ±2. The following observations are made concerning these solutions:

(i) For each fixed n ≠ 0, there exists a range (0, λ̄_n) for which there exist two solutions, a local minimum and a saddle point. At the limit (turning) point λ = λ̄_n, there is a qualitative change in the solution from a minimum to a saddle point.

(ii) As a function of n, λ̄_n decreases monotonically as |n| increases. For λ > λ̄_{±1}, only a single solution, the principal branch C(λ; 0), was found using numerical continuation.

(iii) Along the branch with a fixed n ≠ 0, as λ → 0, the saddle point solution escapes to infinity. That is, as λ → 0, the saddle point solution C(λ; n) → (π/2 + (2n − 1)π) [[−∞, −1], [1, −∞]], i.e., the off-diagonal part converges while the diagonal entries diverge. The associated cost J[A] → 1 (the cost of the global minimizer is J* = 0).

(iv) Among the numerically obtained solution branches, the principal branch C(λ; 0) has the lowest cost. Fig. 1(b) depicts the cost for the solutions depicted in Fig. 1(a).

Figure 2: (a) Numerical continuation of the solution in Example 2; (b) The cost J[A] for the critical point (minimum) and the constant (1/T) log(R) solution.
The numerical calculations indicate that while the [λ = 0] problem has infinitely many critical points (all global minimizers), only finitely many critical points persist for any finite positive value of λ. Moreover, there exist both local (but not global) minima as well as saddle points for this case. Among the solutions computed, the principal branch (continued from the principal logarithm C(0; 0)) has the minimum cost.
Example 2 (Non-normal matrix case). Numerical continuation is used to obtain solutions for non-normal

    R = [[0, −1], [1, β]]

where β is a continuation parameter and T = 1. Fig. 2(a) depicts a solution branch as a function of parameter β. The solution is initialized with the normal solution C(0; 0) described in Example 1. By varying β, the solution is continued to β = π/2 (indicated in part (a)). This way, the solution C = (π/2) [[0, 0], [1, 0]] is found for R = [[0, −1], [1, π/2]]. It is easy to verify that C is a solution of the characteristic equation (18) for λ = 0 and T = 1. For this solution, the critical point of the optimal control problem

    A_t = (π/4) [[−sin(πt), cos(πt) − 1], [cos(πt) + 1, sin(πt)]]

is non-constant. The principal logarithm

    log(R) = [[−ω tan θ, −ω sec θ], [ω sec θ, ω tan θ]],   where θ = sin^{−1}(π/4) and ω = π/2 − θ.

The regularization cost for the non-constant solution A_t is strictly smaller than for the constant (1/T) log(R) solution:

    ∫_0^1 tr(A_t A_t^⊤) dt = ∫_0^1 tr(CC^⊤) dt = π^2/4 < 3.76 = ∫_0^1 tr(log(R) log(R)^⊤) dt

Next, the parameter β = π/2 is fixed, and the solution is continued in the parameter λ. Fig. 2(b) depicts the cost J[A] for the resulting solution branch of critical points (minimum). The cost with the constant (1/T) log(R) is also depicted. It is noted that the latter is not a critical point of the optimal control problem for any positive value of λ.
4 Conclusions and directions for future work
In this paper, we studied the optimization problem of learning the weights of a linear neural network with mean-squared loss function. In order to do so, we introduced a novel formulation:

(i) The linear network is modeled as a continuous time (surrogate for layer) optimal control problem;

(ii) A weight decay type regularization is considered where the interest is in the limit as the regularization parameter λ → 0 (the limit is referred to as the [λ = 0^+] problem).

The Maximum Principle of optimal control theory is used to derive the Hamilton's equations for the critical points. A remarkable result of our paper is that the critical point solutions of the infinite-dimensional problem are completely characterized via the solutions of a finite-dimensional characteristic equation (Eq. (18)). That such a reduction is possible is unexpected because the weight update equation is nonlinear (even in the settings of linear networks).
Based on the analysis of the characteristic equation, several conclusions are obtained³:

(i) It has been noted in literature that, for linear networks, all critical points are global minima. While this is also true here for the [λ = 0] and the [λ = 0^+] problems, even a small amount of regularization alters the picture, e.g., saddle points emerge (Example 1).

(ii) The critical points of the regularized [λ = 0^+] problem are qualitatively very different compared to the non-regularized [λ = 0] problem (Remark 4). Several quantitative results on the critical points of the regularized problem are described in Theorem 2 and Examples 1 and 2.

(iii) The study of the characteristic equation revealed an unexpected qualitative difference in the critical points between the two cases where R := E[Z X_0^⊤] is a normal or non-normal matrix. In the latter (generic) case, the network weights are necessarily non-constant (Prop. 2).
We believe that the ideas and tools introduced in this paper will be useful for the researchers working on the analysis of deep learning. In particular, the paper is expected to highlight and spur work on implicit regularization. Some directions for future work are briefly noted next:

(i) Non-normal solutions of the characteristic equation: Analysis of the non-normal solutions of the characteristic equation remains an open problem. The non-normal solutions are important because of the following empirical observation (summarized as part of the supplementary material): In numerical experiments with learning, the weights can get stuck at non-normal critical points before eventually converging to a "good" minimum.

(ii) Generalization error: With a finite number of samples (X_0^i, Z^i)_{i=1}^N, the characteristic equation becomes

    λ C = F^⊤ (R − F) Σ_0^{(N)} + F^⊤ Q^{(N)}

where Σ_0^{(N)} := (1/N) Σ_{i=1}^N X_0^i X_0^{i⊤} and Q^{(N)} := (1/N) Σ_{i=1}^N X_0^i ξ^{i⊤}. Sensitivity analysis of the solution of the characteristic equation, with respect to variations in Σ_0^{(N)} and Q^{(N)}, can shed light on the generalization error for different critical points.

(iii) Second order analysis: The paper does not contain second order analysis of the critical points, i.e., to determine whether they are local minima or saddle points. Based on certain preliminary results for the scalar case, it is conjectured that the second order analysis is possible in terms of the first order variation for the characteristic equation.

³ Qualitative aspects of some of the conclusions may be obvious to experts in Deep Learning. The objective here is to obtain quantitative characterization in the (relatively tractable) setting of linear networks.
References

P. F. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
P. F. Baldi and K. Hornik. Learning in linear neural networks: A survey. IEEE Transactions on Neural Networks, 6(4):837–858, 1995.
S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low rank matrix recovery. In Advances in Neural Information Processing Systems, pages 3873–3881, 2016.
A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.
A. Choromanska, Y. LeCun, and G. B. Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pages 1756–1760, 2015b.
R. Clewley, W. E. Sherwood, M. D. LaMar, and J. Guckenheimer. PyDSTool, a software environment for dynamical systems modeling, 2007. URL http://pydstool.sourceforge.net.
W. J. Culver. On the existence and uniqueness of the real logarithm of a matrix. Proceedings of the American Mathematical Society, 17(5):1146–1151, 1966.
Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pages 2933–2941, 2014.
O. Farotimi, A. Dembo, and T. Kailath. A general weight matrix formulation using optimal control. IEEE Transactions on Neural Networks, 2(3):378–394, 1991.
R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points: Online stochastic gradient for tensor decomposition. arXiv:1503.02101, March 2015.
I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
S. Gunasekar, B. Woodworth, S. Bhojanapalli, B. Neyshabur, and N. Srebro. Implicit regularization in matrix factorization. arXiv preprint arXiv:1705.09280, 2017.
M. Hardt and T. Ma. Identity matters in deep learning. arXiv:1611.04231, November 2016.
N. J. Higham. Functions of Matrices. CRC Press, 2014.
K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586–594, 2016.
Y. LeCun, D. Touresky, G. Hinton, and T. Sejnowski. A theoretical framework for back-propagation. In The Connectionist Models Summer School, volume 1, pages 21–28, 1988.
J. D. Lee, M. Simchowitz, M. I. Jordan, and B. Recht. Gradient descent converges to minimizers. arXiv:1602.04915, February 2016.
B. Neyshabur, R. Tomioka, and N. Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.
A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, December 2013.
D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv:1605.08361, May 2016.
W. Su, S. Boyd, and E. Candes. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, pages 2510–2518, 2014.
A. Wibisono, A. Wilson, and M. Jordan. A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences, page 201614734, 2016.
C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Fisher GAN
Youssef Mroueh*, Tom Sercu*
[email protected], [email protected]
* Equal Contribution
AI Foundations, IBM Research AI
IBM T.J Watson Research Center
Abstract
Generative Adversarial Networks (GANs) are powerful models for learning complex distributions. Stable training of GANs has been addressed in many recent
works which explore different metrics between distributions. In this paper we
introduce Fisher GAN which fits within the Integral Probability Metrics (IPM)
framework for training GANs. Fisher GAN defines a critic with a data dependent
constraint on its second order moments. We show in this paper that Fisher GAN
allows for stable and time efficient training that does not compromise the capacity
of the critic, and does not need data independent constraints such as weight clipping. We analyze our Fisher IPM theoretically and provide an algorithm based on
Augmented Lagrangian for Fisher GAN. We validate our claims on both image
sample generation and semi-supervised classification using Fisher GAN.
1 Introduction
Generative Adversarial Networks (GANs) [1] have recently become a prominent method to learn
high-dimensional probability distributions. The basic framework consists of a generator neural
network which learns to generate samples which approximate the distribution, while the discriminator
measures the distance between the real data distribution, and this learned distribution that is referred
to as fake distribution. The generator uses the gradients from the discriminator to minimize the
distance with the real data distribution. The distance between these distributions was the object of
study in [2], and highlighted the impact of the distance choice on the stability of the optimization. The
original GAN formulation optimizes the Jensen-Shannon divergence, while later work generalized
this to optimize f-divergences [3], KL [4], the Least Squares objective [5]. Closely related to our
work, Wasserstein GAN (WGAN) [6] uses the earth mover distance, for which the discriminator
function class needs to be constrained to be Lipschitz. To impose this Lipschitz constraint, WGAN
proposes to use weight clipping, i.e. a data independent constraint, but this comes at the cost of
reducing the capacity of the critic and high sensitivity to the choice of the clipping hyper-parameter.
A recent development Improved Wasserstein GAN (WGAN-GP) [7] introduced a data dependent
constraint namely a gradient penalty to enforce the Lipschitz constraint on the critic, which does not
compromise the capacity of the critic but comes at a high computational cost.
We build in this work on the Integral probability Metrics (IPM) framework for learning GAN of [8].
Intuitively the IPM defines a critic function f , that maximally discriminates between the real and
fake distributions. We propose a theoretically sound and time efficient data dependent constraint on
the critic of Wasserstein GAN, that allows a stable training of GAN and does not compromise the
capacity of the critic. Where WGAN-GP uses a penalty on the gradients of the critic, Fisher GAN
imposes a constraint on the second order moments of the critic. This extension to the IPM framework
is inspired by the Fisher Discriminant Analysis method.
The main contributions of our paper are:
1. We introduce in Section 2 the Fisher IPM, a scaling invariant distance between distributions.
Fisher IPM introduces a data dependent constraint on the second order moments of the critic that
discriminates between the two distributions. Such a constraint ensures the boundedness of the metric
and the critic. We show in Section 2.2 that Fisher IPM when approximated with neural networks,
corresponds to a discrepancy between whitened mean feature embeddings of the distributions. In
other words a mean feature discrepancy that is measured with a Mahalanobis distance in the space
computed by the neural network.
2. We show in Section 3 that Fisher IPM corresponds to the Chi-squared distance (χ₂) when the critic has unlimited capacity (the critic belongs to a universal hypothesis function class). Moreover we prove in Theorem 2 that even when the critic is parametrized by a neural network, it approximates the χ₂ distance with a factor which is an inner product between the optimal and the neural network critic. We finally derive generalization bounds of the learned critic from samples from the two distributions, assessing the statistical error and its convergence to the Chi-squared distance from finite sample size.
3. We use Fisher IPM as a GAN objective¹ and formulate an algorithm that combines desirable
properties (Table 1): a stable and meaningful loss between distributions for GAN as in Wasserstein
GAN [6], at a low computational cost similar to simple weight clipping, while not compromising the
capacity of the critic via a data dependent constraint but at a much lower computational cost than [7].
Fisher GAN achieves strong semi-supervised learning results without need of batch normalization in
the critic.
Table 1: Comparison between Fisher GAN and recent related approaches.

                        Stability   Unconstrained capacity   Efficient computation   Representation power (SSL)
Standard GAN [1, 9]        no               yes                      yes                      yes
WGAN, McGan [6, 8]         yes              no                       yes                      no
WGAN-GP [7]                yes              yes                      no                       ?
Fisher GAN (ours)          yes              yes                      yes                      yes
2 Learning GANs with Fisher IPM

2.1 Fisher IPM in an arbitrary function space: General framework
Integral Probability Metric (IPM). Intuitively an IPM defines a critic function f belonging to a function class F, that maximally discriminates between two distributions. The function class F defines how f is bounded, which is crucial to define the metric. More formally, consider a compact space X in R^d. Let F be a set of measurable, symmetric and bounded real valued functions on X. Let P(X) be the set of measurable probability distributions on X. Given two probability distributions P, Q ∈ P(X), the IPM indexed by a symmetric function space F is defined as follows [10]:

    d_F(P, Q) = sup_{f∈F} { E_{x∼P}[f(x)] − E_{x∼Q}[f(x)] }.    (1)

It is easy to see that d_F defines a pseudo-metric over P(X). Note specifically that if F is not bounded, sup_f will scale f to be arbitrarily large. By choosing F appropriately [11], various distances between probability measures can be defined.
First formulation: Rayleigh Quotient. In order to define an IPM in the GAN context, [6, 8] impose the boundedness of the function space via a data independent constraint. This was achieved via restricting the norms of the weights parametrizing the function space to a ℓ_p ball. Imposing such a data independent constraint makes the training highly dependent on the constraint hyper-parameters and restricts the capacity of the learned network, limiting the usability of the learned critic in a semi-supervised learning task. Here we take a different angle and design the IPM to be scaling invariant as a Rayleigh quotient. Instead of measuring the discrepancy between means as in Equation (1), we measure a standardized discrepancy, so that the distance is bounded by construction. Standardizing this discrepancy introduces as we will see a data dependent constraint, that controls the growth of the weights of the critic f and ensures the stability of the training while maintaining the capacity of the critic. Given two distributions P, Q ∈ P(X) the Fisher IPM for a function space F is defined as follows:

    d_F(P, Q) = sup_{f∈F}  (E_{x∼P}[f(x)] − E_{x∼Q}[f(x)]) / sqrt( (1/2) E_{x∼P}[f²(x)] + (1/2) E_{x∼Q}[f²(x)] ).    (2)

¹ Code is available at https://github.com/tomsercu/FisherGAN
[Figure 1 diagram: real samples x ∼ P and fake samples x ∼ Q are embedded by Φ_ω into R^m, where the direction v separates the two means.]

Figure 1: Illustration of Fisher IPM with Neural Networks. Φ_ω is a convolutional neural network which defines the embedding space. v is the direction in this embedding space with maximal mean separation ⟨v, µ_ω(P) − µ_ω(Q)⟩, constrained by the hyperellipsoid v^⊤ Σ_ω(P; Q) v = 1.
While a standard IPM (Equation (1)) maximizes the discrepancy between the means of a function
under two different distributions, Fisher IPM looks for critic f that achieves a tradeoff between
maximizing the discrepancy between the means under the two distributions (between class variance),
and reducing the pooled second order moment (an upper bound on the intra-class variance).
Standardized discrepancies have a long history in statistics and the so-called two-samples hypothesis
testing. For example the classic two-samples Student's t test defines the student statistic as the
ratio between means discrepancy and the sum of standard deviations. It is now well established that
learning generative models has its roots in the two-samples hypothesis testing problem [12]. Non
parametric two samples testing and model criticism from the kernel literature lead to the so called
maximum kernel mean discrepancy (MMD) [13]. The MMD cost function and the mean matching
IPM for a general function space has been recently used for training GAN [14, 15, 8].
Interestingly Harchaoui et al [16] proposed Kernel Fisher Discriminant Analysis for the two samples hypothesis testing problem, and showed its statistical consistency. The standard Fisher discrepancy used in Linear Discriminant Analysis (LDA) or Kernel Fisher Discriminant Analysis (KFDA) can be written:

    sup_{f∈F}  (E_{x∼P}[f(x)] − E_{x∼Q}[f(x)])² / ( Var_{x∼P}(f(x)) + Var_{x∼Q}(f(x)) ),

where Var_{x∼P}(f(x)) = E_{x∼P}[f²(x)] − (E_{x∼P}[f(x)])². Note that in LDA F is restricted to linear functions, in KFDA F is restricted to a Reproducing Kernel Hilbert Space (RKHS). Our Fisher IPM (Eq (2)) deviates from the standard Fisher discrepancy since the numerator is not squared, and we use in the denominator the second order moments instead of the variances. Moreover in our definition of Fisher IPM, F can be any symmetric function class.
Second formulation: Constrained form. Since the distance is scaling invariant, d_F can be written equivalently in the following constrained form:

    d_F(P, Q) = sup_{f∈F, (1/2)E_{x∼P}[f²(x)] + (1/2)E_{x∼Q}[f²(x)] = 1}  E(f) := E_{x∼P}[f(x)] − E_{x∼Q}[f(x)].    (3)
Specifying P, Q: Learning GAN with Fisher IPM. We turn now to the problem of learning GAN with Fisher IPM. Given a distribution P_r ∈ P(X), we learn a function g_θ : Z ⊂ R^{n_z} → X, such that for z ∼ p_z, the distribution of g_θ(z) is close to the real data distribution P_r, where p_z is a fixed distribution on Z (for instance z ∼ N(0, I_{n_z})). Let P_θ be the distribution of g_θ(z), z ∼ p_z. Using Fisher IPM (Equation (3)) indexed by a parametric function class F_p, the generator minimizes the IPM: min_{g_θ} d_{F_p}(P_r, P_θ). Given samples {x_i, 1 . . . N} from P_r and samples {z_i, 1 . . . M} from p_z we shall solve the following empirical problem:

    min_{g_θ} sup_{f_p∈F_p}  Ê(f_p, g_θ) := (1/N) Σ_{i=1}^N f_p(x_i) − (1/M) Σ_{j=1}^M f_p(g_θ(z_j))   Subject to  Ω̂(f_p, g_θ) = 1,    (4)

where Ω̂(f_p, g_θ) = (1/2N) Σ_{i=1}^N f_p²(x_i) + (1/2M) Σ_{j=1}^M f_p²(g_θ(z_j)). For simplicity we will have M = N.
2.2 Fisher IPM with Neural Networks
We will specifically study the case where F is a finite dimensional Hilbert space induced by a neural network Φ_ω (see Figure 1 for an illustration). In this case, an IPM with data-independent constraint will be equivalent to mean matching [8]. We will now show that Fisher IPM will give rise to a whitened mean matching interpretation, or equivalently to mean matching with a Mahalanobis distance.

Rayleigh Quotient. Consider the function space F_{v,ω}, defined as follows

    F_{v,ω} = { f(x) = ⟨v, Φ_ω(x)⟩ | v ∈ R^m, Φ_ω : X → R^m },

Φ_ω is typically parametrized with a multi-layer neural network. We define the mean and covariance (Gramian) feature embedding of a distribution as in McGan [8]:

    µ_ω(P) = E_{x∼P}[Φ_ω(x)]   and   Σ_ω(P) = E_{x∼P}[Φ_ω(x) Φ_ω(x)^⊤],
hv, ?! (P) ?! (Q)i
dFv,! (P, Q) = max max q
,
!
v
v > ( 12 ?! (P) + 12 ?! (Q) + Im )v
(5)
where we added a regularization term ( > 0) to avoid singularity of the covariances. Note that if !
was implemented with homogeneous non linearities such as RELU, if we swap (v, !) with (cv, c0 !)
for any constants c, c0 > 0, the distance dFv,! remains unchanged, hence the scaling invariance.
Constrained Form. Since the Rayleigh Quotient is not amenable to optimization, we will consider Fisher IPM as a constrained optimization problem. By virtue of the scaling invariance and the constrained form of the Fisher IPM given in Equation (3), d_{F_{v,ω}} can be written equivalently as:

    d_{F_{v,ω}}(P, Q) = max_{ω, v : v^⊤((1/2)Σ_ω(P) + (1/2)Σ_ω(Q) + γI_m)v = 1}  ⟨v, µ_ω(P) − µ_ω(Q)⟩    (6)

Define the pooled covariance: Σ_ω(P; Q) = (1/2)Σ_ω(P) + (1/2)Σ_ω(Q) + γI_m. Doing a simple change of variable u = (Σ_ω(P; Q))^{1/2} v we see that:

    d_{F_{u,ω}}(P, Q) = max_ω max_{u, ‖u‖=1} ⟨u, (Σ_ω(P; Q))^{−1/2} (µ_ω(P) − µ_ω(Q))⟩
                      = max_ω ‖ (Σ_ω(P; Q))^{−1/2} (µ_ω(P) − µ_ω(Q)) ‖,    (7)

hence we see that Fisher IPM corresponds to the worst case distance between whitened means. Since the means are white, we don't need to impose further constraints on ω as in [6, 8]. Another interpretation of the Fisher IPM stems from the fact that:

    d_{F_{v,ω}}(P, Q) = max_ω sqrt( (µ_ω(P) − µ_ω(Q))^⊤ Σ_ω^{−1}(P; Q) (µ_ω(P) − µ_ω(Q)) ),

from which we see that Fisher IPM is a Mahalanobis distance between the mean feature embeddings of the distributions. The Mahalanobis distance is defined by the positive definite matrix Σ_ω(P; Q). We show in Appendix A that the gradient penalty in Improved Wasserstein [7] gives rise to a similar Mahalanobis mean matching interpretation.
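For fixed features, the whitened form (7) is cheap to evaluate. A sketch, assuming feature matrices with one sample per row and the γ-regularized pooled second moment (the data below are illustrative):

    import numpy as np

    def fisher_ipm(phi_p, phi_q, gamma=1e-6):
        """Whitened mean-matching form (7): || Sigma^{-1/2} (mu_P - mu_Q) ||."""
        mu_diff = phi_p.mean(0) - phi_q.mean(0)
        Sigma = 0.5 * (phi_p.T @ phi_p / len(phi_p) + phi_q.T @ phi_q / len(phi_q))
        Sigma += gamma * np.eye(Sigma.shape[0])   # pooled second moments + gamma*I
        return np.sqrt(mu_diff @ np.linalg.solve(Sigma, mu_diff))

    rng = np.random.default_rng(0)
    print(fisher_ipm(rng.normal(1.0, 1.0, (5000, 8)), rng.normal(0.0, 1.0, (5000, 8))))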
Learning GAN with Fisher IPM. Hence we see that learning GAN with Fisher IPM:

    min_{g_θ} max_ω max_{v : v^⊤((1/2)Σ_ω(P_r) + (1/2)Σ_ω(P_θ) + γI_m)v = 1}  ⟨v, µ_ω(P_r) − µ_ω(P_θ)⟩

corresponds to a min-max game between a feature space and a generator. The feature space tries to maximize the Mahalanobis distance between the feature means embeddings of real and fake distributions. The generator tries to minimize the mean embedding distance.
3 Theory
We will start first by studying the Fisher IPM defined in Equation (2) when the function space has full capacity, i.e. when the critic belongs to L²(X, (P+Q)/2), meaning that ∫_X f²(x) (P(x)+Q(x))/2 dx < ∞. Theorem 1 shows that under this condition, the Fisher IPM corresponds to the Chi-squared distance between distributions, and gives a closed form expression of the optimal critic function f_χ (see Appendix B for its relation with the Pearson Divergence). Proofs are given in Appendix D.
[Figure 2 panels: (a) contour plot of the two 2D Gaussians; (b) χ₂ distance and MLP estimate versus the mean shift (legend: Exact; MLP, N=M=10k); (c) χ₂ distance and MLP estimate versus N=M=num training samples on a log scale (legend: MLP, shift=3; MLP, shift=1; MLP, shift=0.5).]
Figure 2: Example on 2D synthetic data, where both P and Q are fixed normal distributions with the
same covariance and shifted means along the x-axis, see (a). Fig (b, c) show the exact 2 distance
from numerically integrating Eq (8), together with the estimate obtained from training a 5-layer MLP
with layer size = 16 and LeakyReLU nonlinearity on different training sample sizes. The MLP is
trained using Algorithm 1, where sampling from the generator is replaced by sampling from Q, and
the 2 MLP estimate is computed with Equation (2) on a large number of samples (i.e. out of sample
estimate). We see in (b) that for large enough sample size, the MLP estimate is extremely good. In (c)
we see that for smaller sample sizes, the MLP approximation bounds the ground truth 2 from below
(see Theorem 2) and converges to the ground truth roughly as O( p1N ) (Theorem 3). We notice that
when the distributions have small 2 distance, a larger training size is needed to get a better estimate again this is in line with Theorem 3.
Theorem 1 (Chi-squared distance at full capacity). Consider the Fisher IPM for $\mathcal{F}$ being the space
of all measurable functions endowed by $\frac{1}{2}(\mathbb{P}+\mathbb{Q})$, i.e. $\mathcal{F} := L^2(\mathcal{X}, \frac{\mathbb{P}+\mathbb{Q}}{2})$. Define the Chi-squared
distance between two distributions:
$$\chi_2(\mathbb{P},\mathbb{Q}) = \sqrt{\int_{\mathcal{X}} \frac{(\mathbb{P}(x)-\mathbb{Q}(x))^2}{\frac{\mathbb{P}(x)+\mathbb{Q}(x)}{2}}\,dx}.\qquad(8)$$
The following holds true for any $\mathbb{P}, \mathbb{Q}$, $\mathbb{P}\neq\mathbb{Q}$:
1) The Fisher IPM for $\mathcal{F} = L^2(\mathcal{X}, \frac{\mathbb{P}+\mathbb{Q}}{2})$ is equal to the Chi-squared distance defined above:
$d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) = \chi_2(\mathbb{P},\mathbb{Q})$.
2) The optimal critic of the Fisher IPM on $L^2(\mathcal{X}, \frac{\mathbb{P}+\mathbb{Q}}{2})$ is:
$$f_\chi(x) = \frac{1}{\chi_2(\mathbb{P},\mathbb{Q})}\cdot\frac{\mathbb{P}(x)-\mathbb{Q}(x)}{\frac{\mathbb{P}(x)+\mathbb{Q}(x)}{2}}.$$
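The χ² distance of Equation (8) is easy to evaluate numerically for simple densities, which is how the exact curves of Figure 2 can be reproduced. A hedged sketch for two 1-D Gaussians (an illustrative setup; the grid bounds and resolution are arbitrary choices):

```python
import numpy as np

def chi2_distance(p_pdf, q_pdf, grid):
    # Numerical integration of Eq. (8) on a uniform grid (simple Riemann sum).
    p, q = p_pdf(grid), q_pdf(grid)
    integrand = (p - q) ** 2 / ((p + q) / 2 + 1e-300)
    dx = grid[1] - grid[0]
    return float(np.sqrt(np.sum(integrand) * dx))

gaussian = lambda mu: (lambda x: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi))
grid = np.linspace(-12.0, 12.0, 40001)
for shift in (0.5, 1.0, 3.0):
    print(shift, chi2_distance(gaussian(0.0), gaussian(shift), grid))
```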
We note here that LSGAN [5] at full capacity corresponds to a Chi-squared divergence, with the
main difference that LSGAN has different objectives for the generator and the discriminator (bilevel
optimization), and hence does not optimize a single objective that is a distance between distributions.
The Chi-squared divergence can also be achieved in the f-GAN framework of [3]. We discuss the
advantages of the Fisher formulation in Appendix C.
Optimizing over $L^2(\mathcal{X}, \frac{\mathbb{P}+\mathbb{Q}}{2})$ is not tractable, hence we have to restrict our function class to a
hypothesis class $\mathcal{H}$ that enables tractable computations. Typical choices of the space
$\mathcal{H}$ are: linear functions in the input features, an RKHS, or a nonlinear multilayer neural network with a
linear last layer ($\mathcal{F}_{v,\omega}$). In this section we don't make any assumptions about the function space and
show in Theorem 2 how the Chi-squared distance is approximated in $\mathcal{H}$, and how this depends on
the approximation error of the optimal critic $f_\chi$ in $\mathcal{H}$.
Theorem 2 (Approximating Chi-squared distance in an arbitrary function space $\mathcal{H}$). Let $\mathcal{H}$
be an arbitrary symmetric function space. Define the inner product $\langle f, g\rangle_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})} = \int_{\mathcal{X}} f(x)g(x)\,\frac{\mathbb{P}(x)+\mathbb{Q}(x)}{2}\,dx$,
which induces the Lebesgue norm. Let $S_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})}$ be the unit sphere
in $L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})$: $S_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})} = \{f : \mathcal{X}\to\mathbb{R},\ \|f\|_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})} = 1\}$. The Fisher IPM defined on an
arbitrary function space $\mathcal{H}$, $d_{\mathcal{H}}(\mathbb{P},\mathbb{Q})$, approximates the Chi-squared distance. The approximation
quality depends on the cosine of the angle between the optimal critic $f_\chi$ and its approximation in $\mathcal{H}$. Since $\mathcal{H}$ is
symmetric this cosine is always positive (otherwise the same equality holds with an absolute value):
$$d_{\mathcal{H}}(\mathbb{P},\mathbb{Q}) = \chi_2(\mathbb{P},\mathbb{Q})\sup_{f\in\mathcal{H}\,\cap\, S_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})}}\langle f, f_\chi\rangle_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})}.$$
Equivalently we have the following relative approximation error:
$$\frac{\chi_2(\mathbb{P},\mathbb{Q}) - d_{\mathcal{H}}(\mathbb{P},\mathbb{Q})}{\chi_2(\mathbb{P},\mathbb{Q})} = \frac{1}{2}\inf_{f\in\mathcal{H}\,\cap\, S_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})}}\left\|f - f_\chi\right\|^2_{L^2(\mathcal{X},\frac{\mathbb{P}+\mathbb{Q}}{2})}.$$
From Theorem 2, we know that we always have $d_{\mathcal{H}}(\mathbb{P},\mathbb{Q}) \leq \chi_2(\mathbb{P},\mathbb{Q})$. Moreover, if the space
$\mathcal{H}$ is rich enough to provide a good approximation of the optimal critic $f_\chi$, then $d_{\mathcal{H}}$ is a good
approximation of the Chi-squared distance $\chi_2$.
Generalization bounds for the sample quality of the estimated Fisher IPM from samples from $\mathbb{P}$ and
$\mathbb{Q}$ can be derived akin to [11], with the main difficulty that for the Fisher IPM we have to bound the excess
risk of a cost function with data-dependent constraints on the function class. We give generalization
bounds for learning the Fisher IPM in the supplementary material (Theorem 3, Appendix E). In a
nutshell, the generalization error of the critic learned in a hypothesis class $\mathcal{H}$ from samples of $\mathbb{P}$ and
$\mathbb{Q}$ decomposes into the approximation error from Theorem 2 and a statistical error that is bounded
using data-dependent local Rademacher complexities [17] and scales like $O(\sqrt{1/n})$, $n = MN/(M+N)$.
We illustrate in Figure 2 our main theoretical claims on a toy problem.
4 Fisher GAN Algorithm using ALM

For any choice of the parametric function class $\mathcal{F}_p$ (for example $\mathcal{F}_{v,\omega}$), denote the constraint in Equation
(4) by $\hat{\Omega}(f_p, g_\theta) = \frac{1}{2N}\sum_{i=1}^{N} f_p^2(x_i) + \frac{1}{2N}\sum_{j=1}^{N} f_p^2(g_\theta(z_j))$. Define the Augmented Lagrangian
[18] corresponding to the Fisher GAN objective and constraint given in Equation (4):
$$\mathcal{L}_F(p, \theta, \lambda) = \hat{\mathcal{E}}(f_p, g_\theta) + \lambda\left(1 - \hat{\Omega}(f_p, g_\theta)\right) - \frac{\rho}{2}\left(\hat{\Omega}(f_p, g_\theta) - 1\right)^2,\qquad(9)$$
where $\lambda$ is the Lagrange multiplier and $\rho > 0$ is the quadratic penalty weight. We alternate between
optimizing the critic and the generator. Similarly to [7] we impose the constraint when training the
critic only. Given $\theta$, for training the critic we solve $\max_p\min_\lambda \mathcal{L}_F(p, \theta, \lambda)$. Then given the critic
parameters $p$ we optimize the generator weights $\theta$ to minimize the objective $\min_\theta \hat{\mathcal{E}}(f_p, g_\theta)$. We
give in Algorithm 1 an algorithm for Fisher GAN; note that we use ADAM [19] for optimizing the
parameters of the critic and the generator. We use SGD for the Lagrange multiplier with learning rate
$\rho$, following practices in Augmented Lagrangian methods [18].
Algorithm 1 Fisher GAN
Input: $\rho$ penalty weight, $\eta$ learning rate, $n_c$ number of iterations for training the critic, $N$ batch size
Initialize $p$, $\theta$, $\lambda = 0$
repeat
  for $j = 1$ to $n_c$ do
    Sample a minibatch $x_i,\ i = 1 \ldots N,\ x_i \sim \mathbb{P}_r$
    Sample a minibatch $z_i,\ i = 1 \ldots N,\ z_i \sim p_z$
    $(g_p, g_\lambda) \leftarrow (\nabla_p \mathcal{L}_F, \nabla_\lambda \mathcal{L}_F)(p, \theta, \lambda)$
    $p \leftarrow p + \eta\ \text{ADAM}(p, g_p)$
    $\lambda \leftarrow \lambda - \rho\, g_\lambda$  {SGD rule on $\lambda$ with learning rate $\rho$}
  end for
  Sample $z_i,\ i = 1 \ldots N,\ z_i \sim p_z$
  $d_\theta \leftarrow \nabla_\theta \hat{\mathcal{E}}(f_p, g_\theta) = -\nabla_\theta \frac{1}{N}\sum_{i=1}^{N} f_p(g_\theta(z_i))$
  $\theta \leftarrow \theta - \eta\ \text{ADAM}(\theta, d_\theta)$
until $\theta$ converges
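Below is a minimal PyTorch sketch of one round of Algorithm 1, written as loss minimization. The module names (`critic`, `gen`) and optimizers are assumed to exist; this paraphrases the update rules above rather than reproducing the authors' released code:

```python
import torch

def critic_step(critic, gen, x_real, z, lam, rho, opt_critic):
    # Maximize L_F of Eq. (9) over critic parameters, then SGD step on lambda.
    f_real = critic(x_real)
    f_fake = critic(gen(z).detach())
    e_hat = f_real.mean() - f_fake.mean()                        # IPM objective
    omega_hat = 0.5 * (f_real.pow(2).mean() + f_fake.pow(2).mean())
    loss = -(e_hat + lam * (1 - omega_hat) - rho / 2 * (omega_hat - 1) ** 2)
    opt_critic.zero_grad(); loss.backward(); opt_critic.step()
    # grad of L_F w.r.t. lambda is (1 - omega_hat); descend with step size rho
    return lam - rho * (1.0 - omega_hat.item())

def generator_step(critic, gen, z, opt_gen):
    # Minimize E_hat, i.e. maximize the critic's mean score on fake samples.
    loss = -critic(gen(z)).mean()
    opt_gen.zero_grad(); loss.backward(); opt_gen.step()
```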
[Figure 3 panels: (a) LSUN, (b) CelebA, (c) CIFAR-10; each panel plots the mean difference $\hat{\mathcal{E}}$ (train, and validation where applicable), the Lagrange multiplier $\lambda$, and the constraint $\hat{\Omega}$ against generator iterations.]

Figure 3: Samples and plots of the loss $\hat{\mathcal{E}}(\cdot)$, Lagrange multiplier $\lambda$, and constraint $\hat{\Omega}(\cdot)$ on 3
benchmark datasets. We see that during training, as $\lambda$ grows slowly, the constraint becomes tight.
Figure 4: No Batch Norm: Training results from a critic $f$ without batch normalization. Fisher GAN
(left) produces decent samples, while WGAN with weight clipping (right) does not. We hypothesize
that this is due to the implicit whitening that Fisher GAN provides. (Note that WGAN-GP does also
successfully converge without BN [7].) For both models the learning rate was appropriately reduced.
5 Experiments
We experimentally validate the proposed Fisher GAN. We claim three main results: (1) stable training
with a meaningful and stable loss that goes down as training progresses and correlates with sample
quality, similar to [6, 7]; (2) very fast convergence to good sample quality as measured by inception
score; (3) competitive semi-supervised learning performance, on par with literature baselines, without
requiring normalization of the critic.

We report results on three benchmark datasets: CIFAR-10 [20], LSUN [21] and CelebA [22]. We
parametrize the generator $g_\theta$ and critic $f$ with convolutional neural networks following the model
design from DCGAN [23]. For 64×64 images (LSUN, CelebA) we use the model architecture in
Appendix F.2; for CIFAR-10 we train at a 32×32 resolution using the architecture in F.3 for experiments
regarding sample quality (inception score), while for semi-supervised learning we use a better
regularized discriminator similar to the OpenAI [9] and ALI [24] architectures, as given in F.4. We
used Adam [19] as optimizer for all our experiments, with hyper-parameters given in Appendix F.
Qualitative: Loss stability and sample quality. Figure 3 shows samples and plots during training.
For LSUN we use a higher number of D updates ($n_c = 5$), since we see, similarly to WGAN, that
the loss shows large fluctuations with lower $n_c$ values. For CIFAR-10 and CelebA we use a reduced
$n_c = 2$ with no negative impact on loss stability. CIFAR-10 here was trained without any label
information. We show both train and validation loss on LSUN and CIFAR-10, showing, as can be
expected, no overfitting on the large LSUN dataset and some overfitting on the small CIFAR-10
dataset. To back up our claim that Fisher GAN provides stable training, we trained both a Fisher GAN
and a WGAN where the batch normalization in the critic $f$ was removed (Figure 4).
Quantitative analysis: Inception Score and Speed. It is agreed upon that evaluating generative
models is hard [25]. We follow the literature in using the "inception score" [9] as a metric for the quality of CIFAR-10 samples.
[Figure 5 plots inception score against $g_\theta$ iterations (left) and wallclock time in seconds (right), for (a) Fisher GAN: CE, Conditional; (b) Fisher GAN: CE, G Not Cond.; (c) Fisher GAN: No Lab; and for WGAN-GP, WGAN, and DCGAN baselines.]

Figure 5: CIFAR-10 inception scores under 3 training conditions. Corresponding samples are given
in rows from top to bottom (a, b, c). The inception score plots mirror Figure 3 from [7].
Note: all inception scores are computed from the same TensorFlow codebase, using the architecture
described in Appendix F.3, and with weight initialization from a normal distribution with stdev=0.02.
In Appendix F.1 we show that these choices also benefit our WGAN-GP baseline.
Figure 5 shows the inception score as a function of the number of $g_\theta$ updates
and wallclock time. All timings are obtained by running on a single K40 GPU on the same cluster.
We see from Figure 5 that Fisher GAN both produces better inception scores and has a clear speed
advantage over WGAN-GP.
Quantitative analysis: SSL. One of the main premises of unsupervised learning is to learn features
on a large corpus of unlabeled data in an unsupervised fashion, which are then transferable to other
tasks. This provides a proper framework to measure the performance of our algorithm. This leads
us to quantify the performance of Fisher GAN by semi-supervised learning (SSL) experiments on
CIFAR-10. We do joint supervised and unsupervised training on CIFAR-10, by adding a cross-entropy
term to the IPM objective, in conditional and unconditional generation.
Table 2: CIFAR-10 inception scores using resnet architecture and codebase from [7]. We used
Layer Normalization [26] which outperformed unnormalized resnets. Apart from this, no additional
hyperparameter tuning was done to get stable training of the resnets.
Unsupervised
Method                        Score
ALI [24]                      5.34 ± .05
BEGAN [27]                    5.62
DCGAN [23] (in [28])          6.16 ± .07
Improved GAN (-L+HA) [9]      6.86 ± .06
EGAN-Ent-VI [29]              7.07 ± .10
DFM [30]                      7.72 ± .13
WGAN-GP ResNet [7]            7.86 ± .07
Fisher GAN ResNet (ours)      7.90 ± .05

Supervised
Method                        Score
SteinGAN [31]                 6.35
DCGAN (with labels, in [31])  6.58
Improved GAN [9]              8.09 ± .07
Fisher GAN ResNet (ours)      8.16 ± .12
AC-GAN [32]                   8.25 ± .07
SGAN-no-joint [28]            8.37 ± .08
WGAN-GP ResNet [7]            8.42 ± .10
SGAN [28]                     8.59 ± .12
Unconditional Generation with CE Regularization. We parametrize the critic $f$ as in $\mathcal{F}_{v,\omega}$.
While training the critic using the Fisher GAN objective $\mathcal{L}_F$ given in Equation (9), we train a linear
classifier on the feature space $\Phi_\omega$ of the critic, whenever labels are available ($K$ labels). The linear
classifier is trained with Cross-Entropy (CE) minimization. Then the critic loss becomes
$\mathcal{L}_D = \mathcal{L}_F - \lambda_D \sum_{(x,y)\in\text{lab}} \text{CE}(x, y; S, \omega)$, where $\text{CE}(x, y; S, \omega) = -\log\left[\text{Softmax}(\langle S, \Phi_\omega(x)\rangle)_y\right]$,
where $S \in \mathbb{R}^{K\times m}$ is the linear classifier and $\langle S, \Phi_\omega\rangle \in \mathbb{R}^K$ with slight abuse of notation. $\lambda_D$ is the
regularization hyper-parameter. We now sample three minibatches for each critic update: one labeled
batch from the small labeled dataset for the CE term, and an unlabeled batch plus a generated batch for
the IPM.
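A sketch of the combined SSL critic objective is given below; the tensor shapes and names are illustrative assumptions (`feat_lab` is $\Phi_\omega(x)$ for the labeled batch, `S` the linear classifier head):

```python
import torch.nn.functional as F

def ssl_critic_objective(l_f, feat_lab, S, y_lab, lambda_d):
    # Objective to maximize: L_D = L_F - lambda_D * CE on the labeled batch.
    logits = feat_lab @ S.t()               # <S, Phi_omega(x)> in R^K
    ce = F.cross_entropy(logits, y_lab)     # -log Softmax(...)_y, batch mean
    return l_f - lambda_d * ce
```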
Conditional Generation with CE Regularization. We also trained conditional generator models,
conditioning the generator on $y$ by concatenating the input noise with a 1-of-K embedding of the
label: we now have $g_\theta(z, y)$. We parametrize the critic in $\mathcal{F}_{v,\omega}$ and modify the critic objective
as above. We also add a cross-entropy term for the generator to minimize during its training step:
$\mathcal{L}_G = \hat{\mathcal{E}} + \lambda_G\, \mathbb{E}_{z\sim p(z),\, y\sim p(y)}\, \text{CE}(g_\theta(z, y), y; S, \omega)$. For generator updates we still need to sample
only a single minibatch, since we use the minibatch of samples from $g_\theta(z, y)$ to compute both the
IPM loss $\hat{\mathcal{E}}$ and the CE term. The labels are sampled according to the prior $y \sim p(y)$, which defaults to the
discrete uniform prior when there is no class imbalance. We found $\lambda_D = \lambda_G = 0.1$ to be optimal.
New Parametrization of the Critic: "K + 1 SSL". One specific successful formulation of SSL in
the standard GAN framework was provided in [9], where the discriminator classifies samples into
$K + 1$ categories: the $K$ correct classes, and $K + 1$ for fake samples. Intuitively this puts the real
classes in competition with the fake class. In order to implement this idea in the Fisher framework,
we define a new function class of the critic that puts in competition the $K$ class directions of the
classifier $S_y$, and another "K+1" direction $v$ that indicates fake samples. Hence we propose the
following parametrization for the critic: $f(x) = \sum_{y=1}^{K} p(y|x)\langle S_y, \Phi_\omega(x)\rangle - \langle v, \Phi_\omega(x)\rangle$, where
$p(y|x) = \text{Softmax}(\langle S, \Phi_\omega(x)\rangle)_y$, which is also optimized with Cross-Entropy. Note that this critic
does not fall under the interpretation with whitened means from Section 2.2, but does fall under
the general Fisher IPM framework from Section 2.1. We can use this critic with both conditional
and unconditional generation in the same way as described above. In this setting we found $\lambda_D =
1.5$, $\lambda_G = 0.1$ to be optimal.
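A sketch of the "K+1" critic parametrization, with assumed tensor shapes (`feat`: N×m features, `S`: K×m class directions, `v`: the fake direction):

```python
import torch.nn.functional as F

def k_plus_one_critic(feat, S, v):
    # f(x) = sum_y p(y|x) <S_y, Phi(x)> - <v, Phi(x)>
    logits = feat @ S.t()              # (N, K) class scores <S_y, Phi(x)>
    p = F.softmax(logits, dim=1)       # p(y|x), trained with cross-entropy
    return (p * logits).sum(dim=1) - feat @ v
```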
Layerwise normalization on the critic. For most GAN formulations following DCGAN design
principles, batch normalization (BN) [33] in the critic is an essential ingredient. From our semi-supervised
learning experiments, however, it appears that batch normalization gives substantially
worse performance than layer normalization (LN) [26] or even no layerwise normalization. We
attribute this to the implicit whitening that Fisher GAN provides.
Table 3 shows the SSL results on CIFAR-10. We show that Fisher GAN has competitive results, on
par with state-of-the-art literature baselines. When comparing to WGAN with weight clipping, it
becomes clear that we recover the lost SSL performance. Results with the $K + 1$ critic are better
across the board, consistently demonstrating the advantage of our proposed $K + 1$ formulation. Conditional
generation does not provide gains in the setting with layer normalization or without normalization.
Table 3: CIFAR-10 SSL results (misclassification rate).

Number of labeled examples        1000           2000           4000           8000
Model
CatGAN [34]                       -              -              19.58          -
Improved GAN (FM) [9]             21.83 ± 2.01   19.61 ± 2.09   18.63 ± 2.32   17.72 ± 1.82
ALI [24]                          19.98 ± 0.89   19.09 ± 0.44   17.99 ± 1.62   17.05 ± 1.49
Fisher GAN BN Cond                36.37          32.03          27.42          22.85
Fisher GAN BN Uncond              36.42          33.49          27.36          22.82
Fisher GAN BN K+1 Cond            34.94          28.04          23.85          20.75
Fisher GAN BN K+1 Uncond          33.49          28.60          24.19          21.59
Fisher GAN LN Cond                26.78 ± 1.04   23.30 ± 0.39   20.56 ± 0.64   18.26 ± 0.25
Fisher GAN LN Uncond              24.39 ± 1.22   22.69 ± 1.27   19.53 ± 0.34   17.84 ± 0.15
Fisher GAN LN K+1 Cond            20.99 ± 0.66   19.01 ± 0.21   17.41 ± 0.38   15.50 ± 0.41
Fisher GAN LN K+1, Uncond         19.74 ± 0.21   17.87 ± 0.38   16.13 ± 0.53   14.81 ± 0.16
WGAN (weight clipping) Uncond     69.01          56.48          40.85          30.56
WGAN (weight clipping) Cond       68.11          58.59          42.00          30.91
Fisher GAN No Norm K+1, Uncond    21.15 ± 0.54   18.21 ± 0.30   16.74 ± 0.19   14.80 ± 0.15

6 Conclusion
We have defined Fisher GAN, which provides a stable and fast way of training GANs. The Fisher
GAN is based on a scale invariant IPM, obtained by constraining the second order moments of the critic. We
provide an interpretation as whitened (Mahalanobis) mean feature matching and as the χ² distance. We
show graceful theoretical and empirical advantages of our proposed Fisher GAN.
Acknowledgments. The authors thank Steven J. Rennie for many helpful discussions and Martin
Arjovsky for helpful clarifications and pointers.
References
[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[2] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[3] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural
samplers using variational divergence minimization. In NIPS, 2016.
[4] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised
map inference for image super-resolution. ICLR, 2017.
[5] Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, and Zhen Wang. Least squares
generative adversarial networks. arXiv:1611.04076, 2016.
[6] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. ICML, 2017.
[7] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville.
Improved training of wasserstein gans. arXiv:1704.00028, 2017.
[8] Youssef Mroueh, Tom Sercu, and Vaibhava Goel. Mcgan: Mean and covariance feature
matching gan. arXiv:1702.08398 ICML, 2017.
[9] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. NIPS, 2016.
[10] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances
in Applied Probability, 1997.
[11] Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert
R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal
of Statistics, 2012.
[12] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models.
arXiv:1610.03483, 2016.
[13] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander
Smola. A kernel two-sample test. JMLR, 2012.
[14] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML,
2015.
[15] Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural
networks via maximum mean discrepancy optimization. UAI, 2015.
[16] Zaïd Harchaoui, Francis R Bach, and Eric Moulines. Testing for homogeneity with kernel Fisher
discriminant analysis. In NIPS, 2008.
[17] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local rademacher complexities.
Ann. Statist., 2005.
[18] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[19] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's
thesis, 2009.
[21] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao.
Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop.
arXiv:1506.03365, 2015.
[22] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the
wild. In ICCV, 2015.
[23] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with
deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
[24] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. ICLR, 2017.
[25] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative
models. ICLR, 2016.
[26] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016.
[27] David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative
adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[28] Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. Stacked generative
adversarial networks. arXiv preprint arXiv:1612.04357, 2016.
[29] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating
energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[30] D Warde-Farley and Y Bengio. Improving generative adversarial networks with denoising
feature matching. ICLR submissions, 8, 2017.
[31] Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized mle for
generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
[32] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with
auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
[33] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. Proc. ICML, 2015.
[34] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
[35] Alessandra Tosi, S?ren Hauberg, Alfredo Vellido, and Neil D. Lawrence. Metrics for probabilistic geometries. 2014.
[36] Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Scholkopf, and Gert
R. G. Lanckriet. On integral probability metrics, -divergences and binary classification. 2009.
[37] I. Ekeland and T. Turnbull. Infinite-dimensional Optimization and Convexity. The University of
Chicago Press, 1983.
[38] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward
neural networks. In International conference on artificial intelligence and statistics, pages
249?256, 2010.
[39] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852,
2015.
Information-theoretic analysis of generalization
capability of learning algorithms
Aolin Xu
Maxim Raginsky
{aolinxu2,maxim}@illinois.edu*
Abstract
We derive upper bounds on the generalization error of a learning algorithm in
terms of the mutual information between its input and output. The bounds provide
an information-theoretic understanding of generalization in learning problems,
and give theoretical guidelines for striking the right balance between data fit and
generalization by controlling the input-output mutual information. We propose a
number of methods for this purpose, among which are algorithms that regularize
the ERM algorithm with relative entropy or with random noise. Our work extends
and leads to nontrivial improvements on the recent results of Russo and Zou.
1 Introduction
A learning algorithm can be viewed as a randomized mapping, or a channel in the informationtheoretic language, which takes a training dataset as input and generates a hypothesis as output.
The generalization error is the difference between the population risk of the output hypothesis and
its empirical risk on the training data. It measures how much the learned hypothesis suffers from
overfitting. The traditional way of analyzing the generalization error relies either on certain complexity
measures of the hypothesis space, e.g. the VC dimension and the Rademacher complexity [1], or
on certain properties of the learning algorithm, e.g., uniform stability [2]. Recently, motivated
by improving the accuracy of adaptive data analysis, Russo and Zou [3] showed that the mutual
information between the collection of empirical risks of the available hypotheses and the final output
of the algorithm can be used effectively to analyze and control the bias in data analysis, which is
equivalent to the generalization error in learning problems. Compared to the methods of analysis
based on differential privacy, e.g., by Dwork et al. [4, 5] and Bassily et al. [6], the method proposed
in [3] is simpler and can handle unbounded loss functions; moreover, it provides elegant informationtheoretic insights into improving the generalization capability of learning algorithms. In a similar
information-theoretic spirit, Alabdulmohsin [7, 8] proposed to bound the generalization error in
learning problems using the total-variation information between a random instance in the dataset and
the output hypothesis, but the analysis apply only to bounded loss functions.
In this paper, we follow the information-theoretic framework proposed by Russo and Zou [3] to
derive upper bounds on the generalization error of learning algorithms. We extend the results in [3]
to the situation where the hypothesis space is uncountably infinite, and provide improved upper
bounds on the expected absolute generalization error. We also obtain concentration inequalities for
the generalization error, which were not given in [3]. While the main quantity examined in [3] is the
mutual information between the collection of empirical risks of the hypotheses and the output of the
algorithm, we mainly focus on relating the generalization error to the mutual information between the
input dataset and the output of the algorithm, which formalizes the intuition that the less information
a learning algorithm can extract from the input dataset, the less it will overfit. This viewpoint
provides theoretical guidelines for striking the right balance between data fit and generalization by
controlling the algorithm's input-output mutual information.

* Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of
Illinois, Urbana, IL 61801, USA. This work was supported in part by the NSF CAREER award CCF-1254041
and in part by the Center for Science of Information (CSoI), an NSF Science and Technology Center, under
grant agreement CCF-0939370.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the empirical risk minimization (ERM) algorithm with the input-output mutual information leads to
the well-known Gibbs algorithm. As another example, regularizing the ERM algorithm with random
noise can also control the input-output mutual information. For both the Gibbs algorithm and the
noisy ERM algorithm, we also discuss how to calibrate the regularization in order to incorporate
any prior knowledge of the population risks of the hypotheses into algorithm design. Additionally,
we discuss adaptive composition of learning algorithms, and show that the generalization capability
of the overall algorithm can be analyzed by examining the input-output mutual information of the
constituent algorithms.
Another advantage of relating the generalization error to the input-output mutual information is that
the latter quantity depends on all ingredients of the learning problem, including the distribution of
the dataset, the hypothesis space, the learning algorithm itself, and potentially the loss function, in
contrast to the VC dimension or the uniform stability, which only depend on the hypothesis space or
on the learning algorithm. As the generalization error can strongly depend on the input dataset [9],
the input-output mutual information can be more tightly coupled to the generalization error than the
traditional generalization-guaranteeing quantities of interest. We hope that our work can provide
some information-theoretic understanding of generalization in modern learning problems, which may
not be sufficiently addressed by the traditional analysis tools [9].
For the rest of this section, we define the quantities that will be used in the paper. In the standard
framework of statistical learning theory [10], there is an instance space $\mathcal{Z}$, a hypothesis space $\mathcal{W}$,
and a nonnegative loss function $\ell : \mathcal{W}\times\mathcal{Z} \to \mathbb{R}^+$. A learning algorithm characterized by a Markov
kernel $P_{W|S}$ takes as input a dataset of size $n$, i.e., an $n$-tuple
$$S = (Z_1, \ldots, Z_n)\qquad(1)$$
of i.i.d. random elements of $\mathcal{Z}$ with some unknown distribution $\mu$, and picks a random element $W$ of
$\mathcal{W}$ as the output hypothesis according to $P_{W|S}$. The population risk of a hypothesis $w \in \mathcal{W}$ on $\mu$ is
$$L_\mu(w) \triangleq \mathbb{E}[\ell(w, Z)] = \int_{\mathcal{Z}} \ell(w, z)\,\mu(dz).\qquad(2)$$
The goal of learning is to ensure that the population risk of the output hypothesis $W$ is small, either in
expectation or with high probability, under any data generating distribution $\mu$. The excess risk of $W$
is the difference $L_\mu(W) - \inf_{w\in\mathcal{W}} L_\mu(w)$, and its expected value is denoted as $R_{\mathrm{excess}}(\mu, P_{W|S})$.
Since $\mu$ is unknown, the learning algorithm cannot directly compute $L_\mu(w)$ for any $w \in \mathcal{W}$, but can
instead compute the empirical risk of $w$ on the dataset $S$ as a proxy, defined as
$$L_S(w) \triangleq \frac{1}{n}\sum_{i=1}^{n} \ell(w, Z_i).\qquad(3)$$
For a learning algorithm characterized by $P_{W|S}$, the generalization error on $\mu$ is the difference
$L_\mu(W) - L_S(W)$, and its expected value is denoted as
$$\mathrm{gen}(\mu, P_{W|S}) \triangleq \mathbb{E}[L_\mu(W) - L_S(W)],\qquad(4)$$
where the expectation is taken with respect to the joint distribution $P_{S,W} = \mu^{\otimes n} \otimes P_{W|S}$. The
expected population risk can then be decomposed as
$$\mathbb{E}[L_\mu(W)] = \mathbb{E}[L_S(W)] + \mathrm{gen}(\mu, P_{W|S}),\qquad(5)$$
where the first term reflects how well the output hypothesis fits the dataset, while the second term
reflects how well the output hypothesis generalizes. To minimize $\mathbb{E}[L_\mu(W)]$ we need both terms in
(5) to be small. However, it is generally impossible to minimize the two terms simultaneously, and
any learning algorithm faces a trade-off between the empirical risk and the generalization error. In
what follows, we will show how the generalization error can be related to the mutual information
between the input and output of the learning algorithm, and how we can use these relationships to
guide the algorithm design to reduce the population risk by balancing fitting and generalization.
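The decomposition in (5) can be probed empirically. The following sketch estimates gen(μ, P_{W|S}) for any learning rule by Monte Carlo, using fresh samples as a stand-in for the population risk; all callables and names here are illustrative assumptions:

```python
import numpy as np

def estimate_gen_error(sample_mu, learn, loss, n, trials=200, m_test=10_000, seed=0):
    # Monte Carlo estimate of gen(mu, P_{W|S}) = E[L_mu(W) - L_S(W)].
    # sample_mu(k, rng) -> k i.i.d. instances; learn(S, rng) -> hypothesis W;
    # loss(W, data) -> per-instance losses as an array.
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(trials):
        S = sample_mu(n, rng)
        W = learn(S, rng)
        emp = loss(W, S).mean()                        # empirical risk L_S(W)
        pop = loss(W, sample_mu(m_test, rng)).mean()   # proxy for L_mu(W)
        gaps.append(pop - emp)
    return float(np.mean(gaps))
```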
2 Algorithmic stability in input-output mutual information
As discussed above, having a small generalization error is crucial for a learning algorithm to produce
an output hypothesis with a small population risk. It turns out that the generalization error of a learning
algorithm can be determined by its stability properties. Traditionally, a learning algorithm is said to be
stable if a small change of the input to the algorithm does not change the output of the algorithm much.
Examples include uniform stability defined by Bousquet and Elisseeff [2] and on-average stability
defined by Shalev-Shwartz et al. [11]. In recent years, information-theoretic stability notions, such as
those measured by differential privacy [5], KL divergence [6, 12], total-variation information [7], and
erasure mutual information [13], have been proposed. All existing notions of stability show that the
generalization capability of a learning algorithm hinges on how sensitive the output of the algorithm is
to local modifications of the input dataset. It implies that the less dependent the output hypothesis W
is on the input dataset S, the better the learning algorithm generalizes. From an information-theoretic
point of view, the dependence between S and W can be naturally measured by the mutual information
between them, which prompts the following information-theoretic definition of stability. We say that
a learning algorithm is $(\varepsilon, \mu)$-stable in input-output mutual information if, under the data-generating
distribution $\mu$,
$$I(S; W) \leq \varepsilon.\qquad(6)$$
Further, we say that a learning algorithm is $\varepsilon$-stable in input-output mutual information if
$$\sup_{\mu} I(S; W) \leq \varepsilon.\qquad(7)$$
According to the definitions in (6) and (7), the less information the output of a learning algorithm can
provide about its input dataset, the more stable it is. Interestingly, if we view the learning algorithm
$P_{W|S}$ as a channel from $\mathcal{Z}^n$ to $\mathcal{W}$, the quantity $\sup_\mu I(S; W)$ can be viewed as the information
capacity of the channel, under the constraint that the input distribution is of a product form. The
definition in (7) means that a learning algorithm is more stable if its information capacity is smaller.
The advantage of the weaker definition in (6) is that $I(S; W)$ depends on both the algorithm and the
distribution of the dataset. Therefore, it can be more tightly coupled with the generalization error,
which itself depends on the dataset. We mainly focus on studying the consequence of this notion of
$(\varepsilon, \mu)$-stability in input-output mutual information for the rest of this paper.
3 Upper-bounding generalization error via I(S; W)

In this section, we derive various generalization guarantees for learning algorithms that are stable in
input-output mutual information.

3.1 A decoupling estimate
We start with a digression from the statistical learning problem to a more general problem, which may
be of independent interest. Consider a pair of random variables $X$ and $Y$ with joint distribution $P_{X,Y}$.
Let $\bar{X}$ be an independent copy of $X$, and $\bar{Y}$ an independent copy of $Y$, such that $P_{\bar{X},\bar{Y}} = P_X \otimes P_Y$.
For an arbitrary real-valued function $f : \mathcal{X}\times\mathcal{Y} \to \mathbb{R}$, we have the following upper bound on the
absolute difference between $\mathbb{E}[f(X, Y)]$ and $\mathbb{E}[f(\bar{X}, \bar{Y})]$.

Lemma 1 (proved in Appendix A). If $f(\bar{X}, \bar{Y})$ is $\sigma$-subgaussian under $P_{\bar{X},\bar{Y}} = P_X \otimes P_Y$, then
$$\left|\mathbb{E}[f(X, Y)] - \mathbb{E}[f(\bar{X}, \bar{Y})]\right| \leq \sqrt{2\sigma^2 I(X; Y)}.\qquad(8)$$
3.2 Upper bound on expected generalization error

Upper-bounding the generalization error of a learning algorithm $P_{W|S}$ can be cast as a special case of
the preceding problem, by setting $X = S$, $Y = W$, and $f(s, w) = \frac{1}{n}\sum_{i=1}^{n}\ell(w, z_i)$. For an arbitrary
$w \in \mathcal{W}$, the empirical risk can be expressed as $L_S(w) = f(S, w)$ and the population risk can be
expressed as $L_\mu(w) = \mathbb{E}[f(S, w)]$. Moreover, the expected generalization error can be written as
$$\mathrm{gen}(\mu, P_{W|S}) = \mathbb{E}[f(\bar{S}, \bar{W})] - \mathbb{E}[f(S, W)],\qquad(9)$$

² Recall that a random variable $U$ is $\sigma$-subgaussian if $\log\mathbb{E}[e^{\lambda(U - \mathbb{E}U)}] \leq \lambda^2\sigma^2/2$ for all $\lambda \in \mathbb{R}$.
where the joint distribution of $S$ and $W$ is $P_{S,W} = \mu^{\otimes n} \otimes P_{W|S}$. If $\ell(w, Z)$ is $\sigma$-subgaussian for all
$w \in \mathcal{W}$, then $f(S, w)$ is $\sigma/\sqrt{n}$-subgaussian due to the i.i.d. assumption on the $Z_i$'s, hence $f(\bar{S}, \bar{W})$ is
$\sigma/\sqrt{n}$-subgaussian. This, together with Lemma 1, leads to the following theorem.

Theorem 1. Suppose $\ell(w, Z)$ is $\sigma$-subgaussian under $\mu$ for all $w \in \mathcal{W}$, then
$$\left|\mathrm{gen}(\mu, P_{W|S})\right| \leq \sqrt{\frac{2\sigma^2}{n} I(S; W)}.\qquad(10)$$
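Evaluating the right-hand side of (10) is trivial once σ, n, and an estimate (or upper bound) of I(S; W) are in hand; a one-line helper, purely for illustration:

```python
import math

def mi_gen_bound(sigma, n, mi_nats):
    # Right-hand side of (10): sqrt(2 * sigma^2 * I(S;W) / n), with I in nats.
    return math.sqrt(2.0 * sigma ** 2 * mi_nats / n)

# A [0,1]-valued loss is 1/2-subgaussian, so e.g. with I(S;W) <= 1 nat:
print(mi_gen_bound(0.5, 1000, 1.0))   # about 0.0224
```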
Theorem 1 suggests that, by controlling the mutual information between the input and the output
of a learning algorithm, we can control its generalization error. The theorem allows us to consider
unbounded loss functions as long as the subgaussian condition is satisfied. For a bounded loss
function $\ell(\cdot,\cdot) \in [a, b]$, $\ell(w, Z)$ is guaranteed to be $(b-a)/2$-subgaussian for all $\mu$ and all $w \in \mathcal{W}$.
Russo and Zou [3] considered the same problem setup with the restriction that the hypothesis space
$\mathcal{W}$ is finite, and showed that $|\mathrm{gen}(\mu, P_{W|S})|$ can be upper-bounded in terms of $I(\Lambda_{\mathcal{W}}(S); W)$, where
$$\Lambda_{\mathcal{W}}(S) \triangleq \big(L_S(w)\big)_{w\in\mathcal{W}}\qquad(11)$$
is the collection of empirical risks of the hypotheses in $\mathcal{W}$. Using Lemma 1 by setting $X = \Lambda_{\mathcal{W}}(S)$,
$Y = W$, and $f(\Lambda_{\mathcal{W}}(s), w) = L_s(w)$, we immediately recover the result by Russo and Zou even
when $\mathcal{W}$ is uncountably infinite:

Theorem 2 (Russo and Zou [3]). Suppose $\ell(w, Z)$ is $\sigma$-subgaussian under $\mu$ for all $w \in \mathcal{W}$, then
$$\left|\mathrm{gen}(\mu, P_{W|S})\right| \leq \sqrt{\frac{2\sigma^2}{n} I(\Lambda_{\mathcal{W}}(S); W)}.\qquad(12)$$
It should be noted that Theorem 1 can be obtained as a consequence of Theorem 2 because
$$I(\Lambda_{\mathcal{W}}(S); W) \leq I(S; W),\qquad(13)$$
which is due to the Markov chain $\Lambda_{\mathcal{W}}(S) - S - W$, as for each $w \in \mathcal{W}$, $L_S(w)$ is a function of $S$.
However, if the output $W$ depends on $S$ only through the empirical risks $\Lambda_{\mathcal{W}}(S)$, in other words,
when the Markov chain $S - \Lambda_{\mathcal{W}}(S) - W$ holds, then Theorem 1 and Theorem 2 are equivalent. The
advantage of Theorem 1 is that $I(S; W)$ can be much easier to evaluate than $I(\Lambda_{\mathcal{W}}(S); W)$, and can
provide better insights to guide the algorithm design. We will elaborate on this when we discuss the
Gibbs algorithm and the adaptive composition of learning algorithms.

Theorem 1 and Theorem 2 only provide upper bounds on the expected generalization error. We are
often interested in analyzing the absolute generalization error $|L_\mu(W) - L_S(W)|$, e.g., its expected
value or the probability for it to be small. We need to develop stronger tools to tackle these problems,
which is the subject of the next two subsections.
3.3 A concentration inequality for $|L_\mu(W) - L_S(W)|$

For any fixed $w \in \mathcal{W}$, if $\ell(w, Z)$ is $\sigma$-subgaussian, the Chernoff-Hoeffding bound gives $\mathbb{P}[|L_\mu(w) -
L_S(w)| > \alpha] \leq 2e^{-\alpha^2 n/2\sigma^2}$. It implies that, if $S$ and $W$ are independent, then a sample size of
$$n = \frac{2\sigma^2}{\alpha^2}\log\frac{2}{\beta}\qquad(14)$$
suffices to guarantee
$$\mathbb{P}[|L_\mu(W) - L_S(W)| > \alpha] \leq \beta.\qquad(15)$$
The following results show that, when $W$ is dependent on $S$, as long as $I(S; W)$ is sufficiently small,
a sample complexity polynomial in $1/\alpha$ and logarithmic in $1/\beta$ still suffices to guarantee (15), where
the probability now is taken with respect to the joint distribution $P_{S,W} = \mu^{\otimes n} \otimes P_{W|S}$.

Theorem 3 (proved in Appendix B). Suppose $\ell(w, Z)$ is $\sigma$-subgaussian under $\mu$ for all $w \in \mathcal{W}$. If
a learning algorithm satisfies $I(\Lambda_{\mathcal{W}}(S); W) \leq \varepsilon$, then for any $\alpha > 0$ and $0 < \beta \leq 1$, (15) can be
guaranteed by a sample complexity of
$$n = \frac{8\sigma^2}{\alpha^2}\left(\frac{\varepsilon}{\beta} + \log\frac{2}{\beta}\right).\qquad(16)$$
In view of (13), any learning algorithm that is $(\varepsilon, \mu)$-stable in input-output mutual information
satisfies the condition $I(\Lambda_{\mathcal{W}}(S); W) \leq \varepsilon$. The proof of Theorem 3 is based on Lemma 1 and an
adaptation of the "monitor technique" proposed by Bassily et al. [6]. While the high-probability
bounds of [4-6] based on differential privacy are for bounded loss functions and for functions with
bounded differences, the result in Theorem 3 only requires $\ell(w, Z)$ to be subgaussian. We have the
following corollary of Theorem 3.

Corollary 1. Under the conditions in Theorem 3, if for some function $g(n) \geq 1$, $\varepsilon \leq (g(n) -
1)\beta\log\frac{2}{\beta}$, then a sample complexity that satisfies $n/g(n) \geq \frac{8\sigma^2}{\alpha^2}\log\frac{2}{\beta}$ guarantees (15).

For example, taking $g(n) = 2$, Corollary 1 implies that if $\varepsilon \leq \beta\log(2/\beta)$, then (15) can be
guaranteed by a sample complexity of $n = (16\sigma^2/\alpha^2)\log(2/\beta)$, which is on the same order as
the sample complexity in (14) when $S$ and $W$ are independent. As another example, taking
$g(n) = \sqrt{n}$, Corollary 1 implies that if $\varepsilon \leq (\sqrt{n} - 1)\beta\log(2/\beta)$, then a sample complexity of
$n = (64\sigma^4/\alpha^4)\left(\log(2/\beta)\right)^2$ guarantees (15).
3.4 Upper bound on $\mathbb{E}|L_\mu(W) - L_S(W)|$

A byproduct of the proof of Theorem 3 (setting $m = 1$ in the proof) is an upper bound on the expected
absolute generalization error.

Theorem 4. Suppose $\ell(w, Z)$ is $\sigma$-subgaussian under $\mu$ for all $w \in \mathcal{W}$. If a learning algorithm
satisfies $I(\Lambda_{\mathcal{W}}(S); W) \leq \varepsilon$, then
$$\mathbb{E}\left|L_\mu(W) - L_S(W)\right| \leq \sqrt{\frac{2\sigma^2}{n}(\varepsilon + \log 2)}.\qquad(17)$$
This result improves [3, Prop. 3.2], which states that $\mathbb{E}|L_S(W) - L_\mu(W)| \leq \sigma/\sqrt{n} + 36\sqrt{2\sigma^2\varepsilon/n}$.
Theorem 4 together with Markov's inequality implies that (15) can be guaranteed by $n = \frac{2\sigma^2}{\alpha^2\beta^2}(\varepsilon +
\log 2)$, but it has a worse dependence on $\beta$ as compared to the sample complexity given by Theorem 3.
4 Learning algorithms with input-output mutual information stability

In this section, we discuss several learning problems and algorithms from the viewpoint of input-output
mutual information stability. We first consider two cases where the input-output mutual
information can be upper-bounded via the properties of the hypothesis space. Then we propose
two learning algorithms with controlled input-output mutual information by regularizing the ERM
algorithm. We also discuss other methods to induce input-output mutual information stability, and
the stability of learning algorithms obtained from adaptive composition of constituent algorithms.
4.1 Countable hypothesis space

When the hypothesis space is countable, the input-output mutual information can be directly upper-bounded
by $H(W)$, the entropy of $W$. If $|\mathcal{W}| = k$, we have $H(W) \leq \log k$. From Theorem 1, if
$\ell(w, Z)$ is $\sigma$-subgaussian for all $w \in \mathcal{W}$, then for any learning algorithm $P_{W|S}$ with countable $\mathcal{W}$,
$$\left|\mathrm{gen}(\mu, P_{W|S})\right| \leq \sqrt{\frac{2\sigma^2 H(W)}{n}}.\qquad(18)$$
For the ERM algorithm, the upper bounds for the expected generalization error also hold for the
expected excess risk, since the empirical risk of the ERM algorithm satisfies
$$\mathbb{E}[L_S(W_{\mathrm{ERM}})] = \mathbb{E}\left[\inf_{w\in\mathcal{W}} L_S(w)\right] \leq \inf_{w\in\mathcal{W}} \mathbb{E}[L_S(w)] = \inf_{w\in\mathcal{W}} L_\mu(w).\qquad(19)$$
For an uncountable hypothesis space, we can always convert it to a finite one by quantizing the output
hypothesis. For example, if $\mathcal{W} \subset \mathbb{R}^m$, we can define the covering number $N(r, \mathcal{W})$ as the cardinality
of the smallest set $\mathcal{W}_0 \subset \mathbb{R}^m$ such that for all $w \in \mathcal{W}$ there is $w_0 \in \mathcal{W}_0$ with $\|w - w_0\| \leq r$, and we
can use $\mathcal{W}_0$ as the codebook for quantization. The final output hypothesis $W'$ will be an element of
$\mathcal{W}_0$. If $\mathcal{W}$ lies in a $d$-dimensional subspace of $\mathbb{R}^m$ and $\max_{w\in\mathcal{W}}\|w\| = B$, then setting $r = 1/\sqrt{n}$,
we have $N(r, \mathcal{W}) \leq (2B\sqrt{dn})^d$, and under the subgaussian condition of $\ell$,
$$\left|\mathrm{gen}(\mu, P_{W'|S})\right| \leq \sqrt{\frac{2\sigma^2 d}{n}\log\left(2B\sqrt{dn}\right)}.\qquad(20)$$
4.2 Binary Classification

For the problem of binary classification, $\mathcal{Z} = \mathcal{X}\times\mathcal{Y}$, $\mathcal{Y} = \{0, 1\}$, $\mathcal{W}$ is a collection of classifiers
$w : \mathcal{X} \to \mathcal{Y}$, which could be uncountably infinite, and $\ell(w, z) = \mathbf{1}\{w(x) \neq y\}$. Using Theorem 1,
we can perform a simple analysis of the following two-stage algorithm [14, 15] that can achieve the
same performance as ERM. Given the dataset $S$, split it into $S_1$ and $S_2$ with lengths $n_1$ and $n_2$. First,
pick a subset of hypotheses $\mathcal{W}_1 \subset \mathcal{W}$ based on $S_1$ such that $(w(X_1), \ldots, w(X_{n_1}))$ for $w \in \mathcal{W}_1$ are
all distinct and $\{(w(X_1), \ldots, w(X_{n_1})),\ w \in \mathcal{W}_1\} = \{(w(X_1), \ldots, w(X_{n_1})),\ w \in \mathcal{W}\}$. In other
words, $\mathcal{W}_1$ forms an empirical cover of $\mathcal{W}$ with respect to $S_1$. Then pick a hypothesis from $\mathcal{W}_1$ with
the minimal empirical risk on $S_2$, i.e.,
$$W = \arg\min_{w\in\mathcal{W}_1} L_{S_2}(w).\qquad(21)$$
Denoting the $n$th shatter coefficient and the VC dimension of $\mathcal{W}$ by $\mathsf{S}_n$ and $V$, we can upper-bound
the expected generalization error of $W$ with respect to $S_2$ as
$$\mathbb{E}[L_\mu(W)] - \mathbb{E}[L_{S_2}(W)] = \mathbb{E}\big[\mathbb{E}[L_\mu(W) - L_{S_2}(W)\,|\,S_1]\big] \leq \sqrt{\frac{V\log(n_1+1)}{2n_2}},\qquad(22)$$
where we have used the fact that $I(S_2; W | S_1 = s_1) \leq H(W | S_1 = s_1) \leq \log \mathsf{S}_{n_1} \leq V\log(n_1+1)$,
by Sauer's lemma, and Theorem 1. It can also be shown that [14, 15]
$$\mathbb{E}[L_{S_2}(W)] \leq \mathbb{E}\left[\inf_{w\in\mathcal{W}_1} L_\mu(w)\right] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + c\sqrt{\frac{V}{n_1}},\qquad(23)$$
where the second expectation is taken with respect to $\mathcal{W}_1$, which depends on $S_1$, and $c$ is a constant.
Combining (22) and (23) and setting $n_1 = n_2 = n/2$, we have for some constant $c$,
$$\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + c\sqrt{\frac{V\log n}{n}}.\qquad(24)$$
From an information-theoretic point of view, the above two-stage algorithm effectively controls the
conditional mutual information $I(S_2; W | S_1)$ by extracting an empirical cover of $\mathcal{W}$ using $S_1$, while
maintaining a small empirical risk using $S_2$.
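For a finite pool of candidate classifiers standing in for $\mathcal{W}$, the two-stage algorithm is a few lines. This sketch assumes each classifier is a callable returning labels and that `emp_risk` computes the empirical 0-1 risk (both illustrative assumptions):

```python
def two_stage_select(classifiers, X1, S2, emp_risk):
    # Stage 1: keep one classifier per distinct labeling of X1
    # (an empirical cover W_1 of the pool with respect to S_1).
    cover, seen = [], set()
    for w in classifiers:
        key = tuple(int(y) for y in w(X1))
        if key not in seen:
            seen.add(key)
            cover.append(w)
    # Stage 2: empirical risk minimization over the cover on the held-out split S_2.
    return min(cover, key=lambda w: emp_risk(w, S2))
```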
4.3 Gibbs algorithm

As Theorem 1 shows that the generalization error can be upper-bounded in terms of $I(S; W)$, it is
natural to consider an algorithm that minimizes the empirical risk regularized by $I(S; W)$:
$$P^{\star}_{W|S} = \arg\inf_{P_{W|S}}\left(\mathbb{E}[L_S(W)] + \frac{1}{\beta} I(S; W)\right),\qquad(25)$$
where $\beta > 0$ is a parameter that balances fitting and generalization. To deal with the issue that $\mu$
is unknown to the learning algorithm, we can relax the above optimization problem by replacing
$I(S; W)$ with an upper bound $D(P_{W|S}\|Q|P_S) = I(S; W) + D(P_W\|Q)$, where $Q$ is an arbitrary
distribution on $\mathcal{W}$ and $D(P_{W|S}\|Q|P_S) = \int_{\mathcal{Z}^n} D(P_{W|S=s}\|Q)\,\mu^{\otimes n}(ds)$, so that the solution of the
relaxed optimization problem does not depend on $\mu$. It turns out that the well-known Gibbs algorithm
solves the relaxed optimization problem.

Theorem 5 (proved in Appendix C). The solution to the optimization problem
$$P^{\star}_{W|S} = \arg\inf_{P_{W|S}}\left(\mathbb{E}[L_S(W)] + \frac{1}{\beta} D(P_{W|S}\|Q|P_S)\right)\qquad(26)$$
is the Gibbs algorithm, which satisfies
$$P^{\star}_{W|S=s}(dw) = \frac{e^{-\beta L_s(w)}\,Q(dw)}{\mathbb{E}_Q\left[e^{-\beta L_s(W)}\right]}\quad\text{for each } s \in \mathcal{Z}^n.\qquad(27)$$
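Over a finite hypothesis set, the Gibbs algorithm of (27) is a single softmax-style sampling step; a sketch with illustrative inputs:

```python
import numpy as np

def gibbs_algorithm(emp_risks, q, beta, rng):
    # Sample W with P(w_i | S) proportional to q_i * exp(-beta * L_S(w_i)), Eq. (27).
    logp = np.log(np.asarray(q, dtype=float)) - beta * np.asarray(emp_risks)
    logp -= logp.max()                  # for numerical stability
    p = np.exp(logp)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

rng = np.random.default_rng(0)
print(gibbs_algorithm([0.30, 0.25, 0.40], q=[0.5, 0.3, 0.2], beta=50.0, rng=rng))
```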
We would not have been able to arrive at the Gibbs algorithm had we used $I(\Lambda_{\mathcal{W}}(S); W)$
as the regularization term instead of $I(S; W)$ in (25), even if we upper-bound $I(\Lambda_{\mathcal{W}}(S); W)$ by
$D(P_{W|\Lambda_{\mathcal{W}}(S)}\|Q|P_{\Lambda_{\mathcal{W}}(S)})$. Using the fact that the Gibbs algorithm is $(2\beta/n, 0)$-differentially private
when $\ell \in [0, 1]$ [16] and the group property of differential privacy [17], we can upper-bound the
input-output mutual information of the Gibbs algorithm as $I(S; W) \leq 2\beta$. Then from Theorem 1,
we know that for $\ell \in [0, 1]$, $|\mathrm{gen}(\mu, P^{\star}_{W|S})| \leq \sqrt{\beta/n}$. Using Hoeffding's lemma, a tighter upper
bound on the expected generalization error for the Gibbs algorithm is obtained in [13], which states
that if $\ell \in [0, 1]$,
$$\mathrm{gen}(\mu, P^{\star}_{W|S}) \leq \frac{\beta}{2n}.\qquad(28)$$
With the guarantee on the generalization error, we can analyze the population risk of the Gibbs
algorithm. We first present a result for countable hypothesis spaces.
Corollary 2 (proved in Appendix D). Suppose $\mathcal{W}$ is countable. Let $W$ denote the output of the
Gibbs algorithm applied on dataset $S$, and let $w^\circ$ denote the hypothesis that achieves the minimum
population risk among $\mathcal{W}$. For $\ell \in [0, 1]$, the population risk of $W$ satisfies
$$\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + \frac{1}{\beta}\log\frac{1}{Q(w^\circ)} + \frac{\beta}{2n}.\qquad(29)$$
The distribution $Q$ in the Gibbs algorithm can be used to express our preference, or our prior
knowledge of the population risks, of the hypotheses in $\mathcal{W}$, in a way that a higher probability under
$Q$ is assigned to a hypothesis that we prefer. For example, we can order the hypotheses according to
our prior knowledge of their population risks, and set $Q(w_i) = 6/\pi^2 i^2$ for the $i$th hypothesis in the
order; then, setting $\beta = \sqrt{n}$, (29) becomes
$$\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + \frac{2\log i^\circ + 1}{\sqrt{n}},\qquad(30)$$
where $i^\circ$ is the index of $w^\circ$. It means that better prior knowledge of the population risks leads to a
smaller sample complexity to achieve a certain expected excess risk. As another example, if $|\mathcal{W}| = k$
and we have no preference on any hypothesis, then taking $Q$ as the uniform distribution on $\mathcal{W}$ and
setting $\beta = \sqrt{2n\log k}$, (29) becomes $\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + \sqrt{(2/n)\log k}$.
For uncountable hypothesis spaces, we can do a similar analysis for the population risk under a
Lipschitz assumption on the loss function.

Corollary 3 (proved in Appendix E). Suppose $\mathcal{W} = \mathbb{R}^d$. Let $w^\circ$ be the hypothesis that achieves the
minimum population risk among $\mathcal{W}$. Suppose $\ell \in [0, 1]$ and $\ell(\cdot, z)$ is $\rho$-Lipschitz for all $z \in \mathcal{Z}$. Let
$W$ denote the output of the Gibbs algorithm applied on dataset $S$. The population risk of $W$ satisfies
$$\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + \frac{\beta}{2n} + \inf_{a>0}\left(a\rho\sqrt{d} + \frac{1}{\beta} D\big(\mathcal{N}(w^\circ, a^2 I_d)\,\|\,Q\big)\right).\qquad(31)$$
Again, we can use the distribution $Q$ to express our preference of the hypotheses in $\mathcal{W}$. For example,
we can choose $Q = \mathcal{N}(w_Q, b^2 I_d)$ with $b = n^{-1/4} d^{-1/4} \rho^{-1/2}$ and choose $\beta = n^{3/4} d^{1/4} \rho^{1/2}$. Then,
setting $a = b$ in (31), we have
$$\mathbb{E}[L_\mu(W)] \leq \inf_{w\in\mathcal{W}} L_\mu(w) + \frac{d^{1/4}\rho^{1/2}}{2n^{1/4}}\left(\|w_Q - w^\circ\|^2 + 3\right).\qquad(32)$$
This result essentially has no restriction on $\mathcal{W}$, which could be unbounded, and only requires the
Lipschitz condition on $\ell(\cdot, z)$, which could be non-convex. The sample complexity decreases with
better prior knowledge of the optimal hypothesis.
4.4 Noisy empirical risk minimization

Another algorithm with controlled input-output mutual information is the noisy empirical risk
minimization algorithm, where independent noise $N_w$, $w \in \mathcal{W}$, is added to the empirical risk of each
hypothesis, and the algorithm outputs a hypothesis that minimizes the noisy empirical risks:
$$W = \arg\min_{w\in\mathcal{W}}\big(L_S(w) + N_w\big).\qquad(33)$$
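A sketch of the noisy ERM rule (33) with the exponential noise of Corollary 4 below; the hypothesis pool and the schedule $b_i = i^{1.1}/n^{1/3}$ are illustrative:

```python
import numpy as np

def noisy_erm(emp_risks, b, rng):
    # Add independent Exp(mean b_i) noise to each empirical risk, pick the argmin.
    noise = rng.exponential(scale=np.asarray(b, dtype=float))
    return int(np.argmin(np.asarray(emp_risks) + noise))

rng = np.random.default_rng(0)
n = 1000
b = np.arange(1, 6) ** 1.1 / n ** (1 / 3)   # b_i = i^{1.1} / n^{1/3}
print(noisy_erm([0.30, 0.25, 0.40, 0.28, 0.33], b, rng))
```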
Similar to the Gibbs algorithm, we can express our preference of the hypotheses by controlling the
amount of noise added to each hypothesis, such that our preferred hypotheses will be more likely
to be selected when they have similar empirical risks as other hypotheses. The following result
formalizes this idea.
Corollary 4 (proved in Appendix F). Suppose W is countable and is indexed such that a hypothesis
with a lower index is preferred over one with a higher index. Also suppose ` 2 [0, 1]. For the noisy
ERM algorithm in (33), choosing Ni to be an exponential random variable with mean bi , we have
v
! 1
u
1
1
X
u 1 X
L
(w
)
1
?
i
E[L? (W )] ? min L? (wi ) + bio + t
,
(34)
i
2n i=1
bi
b
i=1 i
where io = arg mini L? (wi ). In particular, choosing bi = i1.1 /n1/3 , we have
E[L_μ(W)] ≤ min_i L_μ(w_i) + (i_o^{1.1} + 3)/n^{1/3}.    (35)
Without adding noise, the ERM algorithm applied to the above case when |W| = k can achieve E[L_μ(W_ERM)] ≤ min_{i∈[k]} L_μ(w_i) + √((1/(2n)) log k). Compared with (35), we see that performing noisy ERM may be beneficial when we have high-quality prior knowledge of w_o and when k is large.
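The mechanism is easy to simulate; in the sketch below (our own, with stand-in risks), lower-indexed hypotheses receive less exponential noise and therefore tend to win ties in empirical risk.

import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50
pop_risk = rng.uniform(0.1, 0.9, size=k)                  # stand-in L_mu(w_i)
emp_risk = pop_risk + rng.normal(0.0, 1.0/np.sqrt(n), k)  # stand-in L_S(w_i)
b = np.arange(1, k + 1)**1.1 / n**(1.0/3.0)               # b_i = i^{1.1} / n^{1/3}
noisy = emp_risk + rng.exponential(scale=b)               # L_S(w_i) + N_i, N_i ~ Exp(mean b_i)
w_hat = int(np.argmin(noisy))                             # noisy ERM output (33)
print("selected index:", w_hat + 1, "population risk:", pop_risk[w_hat])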
4.5  Other methods to induce input-output mutual information stability
In addition to the Gibbs algorithm and the noisy ERM algorithm, many other methods may be used to control the input-output mutual information of the learning algorithm. One method is to preprocess the dataset S to obtain S̃, and then run a learning algorithm on S̃. The preprocessing can be adding noise to the data or erasing some of the instances in the dataset, etc. In any case, we have the Markov chain S → S̃ → W, which implies I(S; W) ≤ min{ I(S; S̃), I(S̃; W) }. Another method is the postprocessing of the output of a learning algorithm. For example, the weights W̃ generated by a neural network training algorithm can be quantized or perturbed by noise. This gives rise to the Markov chain S → W̃ → W, which implies I(S; W) ≤ min{ I(W̃; W), I(S; W̃) }. Moreover, strong data processing inequalities [18] may be used to sharpen these upper bounds on I(S; W). Preprocessing of the dataset and postprocessing of the output hypothesis are among numerous regularization methods used in the field of deep learning [19, Ch. 7.5]. Other regularization methods may also be interpreted as ways to induce the input-output mutual information stability of a learning algorithm, and this would be an interesting direction of future research.
4.6  Adaptive composition of learning algorithms
Beyond analyzing the generalization error of individual learning algorithms, examining the input-output mutual information is also useful for analyzing the generalization capability of complex learning algorithms obtained by adaptively composing simple constituent algorithms. Under a k-fold adaptive composition, the dataset S is shared by k learning algorithms that are sequentially executed. For j = 1, . . . , k, the output W_j of the jth algorithm may be drawn from a different hypothesis space W_j based on S and the outputs W^{j−1} of the previously executed algorithms, according to P_{W_j | S, W^{j−1}}. An example with k = 2 is model selection followed by a learning algorithm using the same dataset. Various boosting techniques in machine learning can also be viewed as instances of adaptive composition. From the data processing inequality and the chain rule of mutual information,

I(S; W_k) ≤ I(S; W^k) = Σ_{j=1}^{k} I(S; W_j | W^{j−1}).    (36)
If the Markov chain S → Λ_{W_j}(S) → W_j holds conditional on W^{j−1} for j = 1, . . . , k, then the upper bound in (36) can be sharpened to Σ_{j=1}^{k} I(Λ_{W_j}(S); W_j | W^{j−1}). We can thus control the generalization error of the final output by controlling the conditional mutual information at each step of the composition. This also gives us a way to analyze the generalization error of the composed learning algorithm using the knowledge of local generalization guarantees of the constituent algorithms.
Acknowledgement
We would like to thank Vitaly Feldman and Vivek Bagaria for pointing out errors in the earlier version
of this paper. We also would like to thank Peng Guan for helpful discussions.
References
[1] S. Boucheron, O. Bousquet, and G. Lugosi, "Theory of classification: a survey of some recent advances," ESAIM: Probability and Statistics, vol. 9, pp. 323–375, 2005.
[2] O. Bousquet and A. Elisseeff, "Stability and generalization," J. Machine Learning Res., vol. 2, pp. 499–526, 2002.
[3] D. Russo and J. Zou, "How much does your data exploration overfit? Controlling bias via information usage," arXiv preprint, 2016. [Online]. Available: https://arxiv.org/abs/1511.05219
[4] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth, "Preserving statistical validity in adaptive data analysis," in Proc. of 47th ACM Symposium on Theory of Computing (STOC), 2015.
[5] ——, "Generalization in adaptive data analysis and holdout reuse," in 28th Annual Conference on Neural Information Processing Systems (NIPS), 2015.
[6] R. Bassily, K. Nissim, A. Smith, T. Steinke, U. Stemmer, and J. Ullman, "Algorithmic stability for adaptive data analysis," in Proceedings of The 48th Annual ACM Symposium on Theory of Computing (STOC), 2016.
[7] I. Alabdulmohsin, "Algorithmic stability and uniform generalization," in 28th Annual Conference on Neural Information Processing Systems (NIPS), 2015.
[8] ——, "An information-theoretic route from generalization in expectation to generalization in probability," in 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
[9] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, "Understanding deep learning requires rethinking generalization," in International Conference on Learning Representations (ICLR), 2017.
[10] S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[11] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan, "Learnability, stability and uniform convergence," J. Mach. Learn. Res., vol. 11, pp. 2635–2670, 2010.
[12] Y.-X. Wang, J. Lei, and S. E. Fienberg, "On-average KL-privacy and its equivalence to generalization for max-entropy mechanisms," in Proceedings of the International Conference on Privacy in Statistical Databases, 2016.
[13] M. Raginsky, A. Rakhlin, M. Tsao, Y. Wu, and A. Xu, "Information-theoretic analysis of stability and bias of learning algorithms," in Proceedings of IEEE Information Theory Workshop, 2016.
[14] K. L. Buescher and P. R. Kumar, "Learning by canonical smooth estimation. I. Simultaneous estimation," IEEE Transactions on Automatic Control, vol. 41, no. 4, pp. 545–556, Apr 1996.
[15] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[16] F. McSherry and K. Talwar, "Mechanism design via differential privacy," in Proceedings of 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2007.
[17] C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3-4, 2014.
[18] M. Raginsky, "Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels," IEEE Trans. Inform. Theory, vol. 62, no. 6, pp. 3355–3389, 2016.
[19] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[20] S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford Univ. Press, 2013.
[21] T. Zhang, "Information-theoretic upper and lower bounds for statistical estimation," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1307–1321, 2006.
[22] Y. Polyanskiy and Y. Wu, "Lecture Notes on Information Theory," Lecture Notes for ECE563 (UIUC) and 6.441 (MIT), 2012-2016. [Online]. Available: http://people.lids.mit.edu/yp/homepage/data/itlectures_v4.pdf
[23] S. Verdú, "The exponential distribution in information theory," Problems of Information Transmission, vol. 32, no. 1, pp. 86–95, 1996.
6,464 | 6,847 | Sparse Approximate Conic Hulls
Gregory Van Buskirk, Benjamin Raichel, and Nicholas Ruozzi
Department of Computer Science
University of Texas at Dallas
Richardson, TX 75080
{greg.vanbuskirk, benjamin.raichel, nicholas.ruozzi}@utdallas.edu
Abstract
We consider the problem of computing a restricted nonnegative matrix factorization (NMF) of an m × n matrix X. Specifically, we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and C ∈ R^{k×n}_{≥0}. Equivalently, given the matrix X, consider the problem of finding a small subset, S, of the columns of X such that the conic hull of S ε-approximates the conic hull of the columns of X, i.e., the distance of every column of X to the conic hull of the columns of S should be at most an ε-fraction of the angular diameter of X. If k is the size of the smallest ε-approximation, then we produce an O(k/ε^{2/3}) sized O(ε^{1/3})-approximation, yielding the first provable, polynomial time ε-approximation for this class of NMF problems, where also desirably the approximation is independent of n and m. Furthermore, we prove an approximate conic Carathéodory theorem, a general sparsity result, that shows that any column of X can be ε-approximated with an O(1/ε²) sparse combination from S. Our results are facilitated by a reduction to the problem of approximating convex hulls, and we prove that both the convex and conic hull variants are d-SUM-hard, resolving an open problem. Finally, we provide experimental results for the convex and conic algorithms on a variety of feature selection tasks.
1  Introduction
Matrix factorizations of all sorts (SVD, NMF, CU, etc.) are ubiquitous in machine learning and computer science. In general, given an m × n matrix X, the goal is to find a decomposition into a product of two matrices B ∈ R^{m×k} and C ∈ R^{k×n} such that the Frobenius norm between X and
BC is minimized. If no further restrictions are placed on the matrices B and C, this problem can be
solved optimally by computing the singular value decomposition. However, imposing restrictions on
B and C can lead to factorizations which are more desirable for reasons such as interpretability and
sparsity. One of the most common restrictions is non-negative matrix factorization (NMF), requiring
B and C to consist only of non-negative entries (see [Berry et al., 2007] for a survey). Practically,
NMF has seen widespread usage as it often produces nice factorizations that are frequently sparse.
Typically NMF is accomplished by applying local search heuristics, and while NMF can be solved
exactly in certain cases (see [Arora et al., 2016]), in general NMF is not only NP-hard [Vavasis, 2009]
but also d-SUM-hard [Arora et al., 2016].
One drawback of factorizations such as SVD or NMF is that they can represent the data using a basis
that may have no clear relation to the data. CU decompositions [Mahoney and Drineas, 2009] address
this by requiring the basis to consist of input points. While it appears that the hardness of this problem
has not been resolved, approximate solutions are known. Most notable is the additive approximation
of Frieze et al. [2004], though more recently there have been advances on the multiplicative front
[Drineas et al., 2008, Çivril and Magdon-Ismail, 2012, Guruswami and Sinop, 2012]. Similar
restrictions have also been considered for NMF. Donoho and Stodden [2003] introduced a separability
assumption for NMF, and Arora et al. [2016] showed that a NMF can be computed in polynomial
time under this assumption. Various other methods have since been proposed for NMF under the
separability (or near separability) assumption [Recht et al., 2012, Kumar et al., 2013, Benson et al.,
2014, Gillis and Vavasis, 2014, Zhou et al., 2014, Kumar and Sindhwani, 2015]. The separability
assumption requires that there exists a subset S of the columns of X such that X = X_S C for some nonnegative matrix C. This assumption can be restrictive in practice, e.g., when an exact subset does not exist but a close approximate subset does, i.e., X ≈ X_S C. To our knowledge, no exact or
approximate polynomial time algorithms have been proposed for the general problem of computing a
NMF under only the restriction that the columns must be selected from those of X.
In this work, we fill this gap by arguing that a simple greedy algorithm can be used to provide a
polynomial time ?-approximation algorithm for NMF under the column subset restriction. Note that
the separability assumption is not required here: our theoretical analysis bounds the error of our
selected columns versus the best possible columns that could have been chosen. The algorithm is
based off of recent work on fast algorithms for approximately computing the convex hull of a set
of points [Blum et al., 2016]. As in previous approaches [Donoho and Stodden, 2003, Kumar et al.,
2013], we formulate restricted NMF geometrically as finding a subset, S, of the columns of the matrix
X whose conic hull, the set of all nonnegative combinations of columns of S, well-approximates
the conic hull of X. Using gnomonic projection, we reduce the conic hull problem to a convex hull
problem and then apply the greedy strategy of Blum et al. [2016] to compute the convex hull of the
projected points. Given a set of points P in R^m, the convex hull of S ⊆ P, denoted Convex(S), is said to ε-approximate Convex(P) if the Hausdorff distance between Convex(S) and Convex(P) is at most ε · diameter(P). For a fixed ε > 0, suppose the minimum sized subset of P whose convex hull ε-approximates the convex hull of P has size k; then Blum et al. [2016] show that a simple greedy algorithm gives an ε′ = O(ε^{1/3}) approximation using at most k′ = O(k/ε^{2/3}) points of P, with an efficient O(nc(m + c/ε² + c²)) running time, where c = O(k_opt/ε^{2/3}). By careful analysis, we show that our reduction achieves the same guarantees for the conic problem. (Note Blum et al. [2016] present other trade-offs between k′ and ε′, which we argue carry to the conic case as well.) Significantly, k′ and ε′ are independent of n and m, making this algorithm desirable for large high
dimensional point sets. Note that our bounds on the approximation quality and the number of points
do not explicitly depend on the dimension as they are relative to the size of the optimal solution,
which itself may or may not depend on dimension. Like the X-RAY algorithm [Kumar et al., 2013],
our algorithm is easy to parallelize, allowing it to be applied to large-scale problems.
In addition to the above ε-approximation algorithm, we also present two additional theoretical
results of independent interest. The first theoretical contribution provides justification for empirical
observations about the sparsity of NMF [Lee and Seung, 1999, Ding et al., 2010]. Due to the
high dimensional nature of many data sets, there is significant interest in sparse representations
requiring far fewer points than the dimension. Our theoretical justification for sparsity is based
on Carathéodory's theorem: any point q in the convex hull of P can be expressed as a convex combination of at most m + 1 points from P. This is tight in the worst case for exact representation; however, the approximate Carathéodory theorem [Clarkson, 2010, Barman, 2015] states there is a point q′ which is a convex combination of O(1/ε²) points of P (i.e., independent of n and m) such that ‖q − q′‖ ≤ ε · diameter(P). This result has a long history with significant implications in
machine learning, e.g., relating to the analysis of the perceptron algorithm [Novikoff, 1962], though
the clean geometric statement of this theorem appears to be not well known outside the geometry
community. Moreover, this approximation is easily computable with a greedy algorithm (e.g., [Blum
et al., 2016]) similar to the Frank-Wolfe algorithm. The analogous statement for the linear case
does not hold, so it is not immediately obvious whether such an approximate Carathéodory theorem
should hold for the conic case, a question which we answer in the affirmative. As a second theoretical
contribution, we address the question of whether or not the convex/conic hull problems are actually
hard, i.e., whether approximations are actually necessary. We answer this question for both problems
in the affirmative, resolving an open question of Blum et al. [2016], by showing both that the conic
and convex problems are d-SUM-hard.
Finally, we evaluate the performance of the greedy algorithms for computing the convex and conic
hulls on a variety of feature selection tasks against existing methods. We observe that both the
conic and convex algorithms perform well for a variety of feature selection tasks, though, somewhat
surprisingly, the convex hull algorithm, for which previously no experimental results had been
2
produced, yields consistently superior results on text datasets. We use our theoretical results to
provide intuition for these empirical observations.
2  Preliminaries
Let P be a point set in R^m. For any p ∈ P, we interchangeably use the terms vector and point, depending on whether or not we wish to emphasize the direction from the origin. Let ray(p) denote the unbounded ray passing through p, whose base lies at the origin. Let unit(p) denote the unit vector in the direction of p, or equivalently unit(p) is the intersection of ray(p) with the unit hypersphere S^{(m−1)}. For any subset X = {x₁, . . . , x_k} ⊆ P, ray(X) = {ray(x₁), . . . , ray(x_k)} and unit(X) = {unit(x₁), . . . , unit(x_k)}.
Given points p, q ∈ P, let d(p, q) = ‖p − q‖ denote their Euclidean distance, and let ⟨p, q⟩ denote their dot product. Let angle(ray(p), ray(q)) = angle(p, q) = cos⁻¹(⟨unit(p), unit(q)⟩) denote the angle between the rays ray(p) and ray(q), or equivalently between vectors p and q. For two sets, P, Q ⊆ R^m, we write d(P, Q) = min_{p∈P, q∈Q} d(p, q) and for a single point q we write d(q, P) = d({q}, P), and the same definitions apply to angle().

For any subset X = {x₁, . . . , x_k} ⊆ P, let Convex(X) = { Σᵢ λᵢxᵢ | λᵢ ≥ 0, Σᵢ λᵢ = 1 } denote the convex hull of X. Similarly, let Conic(X) = { Σᵢ λᵢxᵢ | λᵢ ≥ 0 } denote the conic hull of X and DualCone(X) = { z ∈ R^m | ⟨x, z⟩ ≥ 0 ∀x ∈ X } the dual cone. For any point q ∈ R^m, the projection of q onto Convex(X) is the closest point to q in Convex(X), proj(q) = proj(q, Convex(X)) = arg min_{x∈Convex(X)} d(q, x). Similarly the angular projection of q onto Conic(X) is the angularly closest point to q in Conic(X), aproj(q) = aproj(q, Conic(X)) = arg min_{x∈Conic(X)} angle(q, x). Note that angular projection defines an entire ray of Conic(X), rather than a single point, for which without loss of generality we choose the point on the ray minimizing the Euclidean distance to q. In fact, abusing notation, we sometimes equivalently view Conic(X) as a set of rays rather than points, in which case aproj(ray(q)) = aproj(q) is the entire ray.
For X ⊆ R^m, let Δ = Δ_X = max_{p,q∈X} d(p, q) denote the diameter of X. The angular diameter of X is Φ = Φ_X = max_{p,q∈X} angle(p, q). Similarly Φ_X(q) = max_{p∈X} angle(p, q) denotes the angular radius of the minimum radius cone centered around the ray through q and containing all of P.

Definition 2.1. Consider a subset X of a point set P ⊆ R^m. X is an ε-approximation to Convex(P) if d_convex(X, P) = max_{p∈Convex(P)} d(p, Convex(X)) ≤ εΔ. Note d_convex(X, P) is the Hausdorff distance between Convex(X) and Convex(P). Similarly X is an ε-approximation to Conic(P) if d_conic(X, P) = max_{p∈Conic(P)} angle(p, Conic(X)) ≤ εΦ_P.

Note that the definition of ε-approximation for Conic(P) uses angular rather than Euclidean distance in order to be defined for rays, i.e., scaling a point outside the conic hull changes its Euclidean distance but its angular distance is unchanged since its ray stays the same. Thus we find considering angles better captures what it means to approximate the conic hull than the distance based Frobenius norm which is often used to evaluate the quality of approximation for NMF.

As we are concerned only with angles, without loss of generality we often will assume that all points in the input set P have been scaled to have unit length, i.e., P = unit(P). In our theoretical results, we will always assume that Φ_P < π/2. Note that if P lies in the non-negative orthant, then for any strictly positive q, Φ_P(q) < π/2. In the case that P is not strictly inside the positive orthant, the points can be uniformly translated a small amount to ensure that Φ_P < π/2.
3  A Simple Greedy Algorithm
Let P be a finite point set in R^m (with unit lengths). Call a point p ∈ P extreme if it lies on the boundary of the conic hull (resp. convex hull). Observe that for any X ⊆ P, containing all the
extreme points, it holds that Conic(X) = Conic(P ) (resp. Convex(X) = Convex(P )). Consider
the simple greedy algorithm which builds a subset of points S, by iteratively adding to S the point
angularly furthest from the conic hull of the current point set S (for the convex hull take the furthest
point in distance). One can argue in each round this algorithm selects an extreme point, and thus can
be used to find a subset of points whose hull captures that of P . Note if the hull is not degenerate, i.e.,
3
no point on the boundary is expressible as a combination of other points on the boundary, then this
produces the minimum sized subset capturing P . Otherwise, one can solve a recursive subproblem as
discussed by Kumar et al. [2013] to exactly recover S.
Here instead we consider finding a small subset of points (potentially much smaller than the number
of extreme points) to approximate the hull. The question is then whether this greedy approach
still yields a reasonable solution, which is not clear as there are simple examples showing the best
approximate subset includes non-extreme points. Moreover, arguing about the conic approximation
directly is challenging as it involves angles and hence spherical (rather than planar) geometry. For the
convex case, Blum et al. [2016] argued that this greedy strategy does yield a good approximation.
Thus we seek a way to reduce our conic problem to an instance of the convex problem, without
introducing too much error in the process, which brings us to the gnomonic projection. Let hplane(q)
be the hyperplane defined by the equation ⟨(q − x), q⟩ = 0 where q ∈ R^m is a unit length normal vector. The gnomonic projection of P onto hplane(q) is defined as gp_q(P) = {ray(P) ∩ hplane(q)} (see Figure 3.1). Note that gp_q(q) = q. For any point x in hplane(q), the inverse gnomonic projection is pg_q(x) = ray(x) ∩ S^{(m−1)}. Similar to other work [Kumar et al., 2013], we allow projections onto
any hyperplane tangent to the unit hypersphere with normal q in the strictly positive orthant.
A key property of the gnomonic projection is that the problem of finding the extreme points of the convex hull of the projected points is equivalent to finding the extreme points of the conic hull of P. (Additional properties of the gnomonic projection are discussed in the full version.) Thus the strategy to approximate the conic hull should now be clear. Let P′ = gp_q(P). We apply the greedy strategy of Blum et al. [2016] to P′ to build a set of extreme points S, by iteratively adding to S the point furthest from the convex hull of the current point set S. This procedure is shown in Algorithm 1.
We show that Algorithm 1 can be used to produce an ε-approximation to the restricted NMF problem. Formally, for ε > 0, let opt(P, ε) denote any minimum cardinality subset X ⊆ P which ε-approximates Conic(P), and let k_opt = |opt(P, ε)|. We consider the following problem.

Problem 3.1. Given a set P of n points in R^m such that Φ_P ≤ π/2 − δ, for a constant δ > 0, and a value ε > 0, compute opt(P, ε).

Alternatively one can fix k rather than ε, defining opt(P, k) = arg min_{X⊆P, |X|=k} d_conic(X, P) and ε_opt = d_conic(opt(P, k), P). Our approach works for either variant, though here we focus on the version in Problem 3.1. Note the bounded angle assumption applies to any collection of points in the strictly positive orthant (a small translation can be used to ensure this for any nonnegative data set).

In this section we argue Algorithm 1 produces an (α, β)-approximation to an instance (P, ε) of Problem 3.1, that is, a subset X ⊆ P such that d_conic(X, P) ≤ α and |X| ≤ β · k_opt = β · |opt(P, ε)|. For ε > 0, similarly define opt_convex(P, ε) to be any minimum cardinality subset X ⊆ P which ε-approximates Convex(P). Blum et al. [2016] gave an (α, β)-approximation for the following.
Problem 3.2. Given a set P of n points in R^m, and a value ε > 0, compute opt_convex(P, ε).

Note the proofs of correctness and approximation quality from Blum et al. [2016] for Problem 3.2 do not immediately imply the same results for using Algorithm 1 for Problem 3.1. To see this, consider any points u, v on S^{(m−1)}. Note the angle between u and v is the same as their geodesic distance on S^{(m−1)}. Intuitively, we want to claim the geodesic distance between u and v is roughly the same as the Euclidean distance between gp_q(u) and gp_q(v). While this is true for points near q, as we move away from q the correspondence breaks down (and is unbounded as you approach π/2). This non-uniform distortion requires care, and thus the proofs had to be moved to the full version.
Finally, observe that Algorithm 1 requires being able to compute the point furthest from the convex hull. To do so we use the (convex) approximate Carathéodory theorem, which is both theoretically and practically very efficient, and produces provably sparse solutions. As a stand alone result, we first prove the conic analog of the approximate Carathéodory theorem. This result is of independent interest since it can be used to sparsify the returned solution from Algorithm 1, or any other algorithm.
3.1  Sparsity and the Approximate Conic Carathéodory Theorem
Our first result is a conic approximate Carathéodory theorem. That is, given a point set P ⊆ R^m and a query point q, then the angularly closest point to q in Conic(P) can be approximately expressed as
Algorithm 1: Greedy Conic Hull
Data: A set of n points, P, in R^m such that Φ_P < π/2, a positive integer k, and a normal vector q in DualCone(P).
Result: S ⊆ P such that |S| = k
Y ← gp_q(P);
Select an arbitrary starting point p₀ ∈ Y;
S ← {p₀};
for i = 2 to k do
    Select p* ← arg max_{p∈Y} d_convex(p, S);
    S ← S ∪ {p*};
Figure 3.1: Side view of gnomonic projection (a point x on the sphere maps to x′ = ray(x) ∩ hplane(q)).
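To make the procedure concrete, the following is a minimal numpy sketch of Algorithm 1 (our own illustration, not code from the paper): the gnomonic projection is computed in closed form, and the distance-to-hull subroutine d_convex is approximated with the greedy Frank-Wolfe-style routine of Theorem 3.3 below. The function names are ours.

import numpy as np

def proj_convex(q, S, iters=200):
    # Approximate projection of q onto Convex(columns of S) using the greedy
    # Frank-Wolfe-style routine behind Theorem 3.3.
    t = S[:, 0].copy()
    for _ in range(iters):
        i = int(np.argmax((q - t) @ S))            # vertex most extreme toward q
        d = S[:, i] - t
        if d @ d < 1e-15:
            break
        lam = np.clip(((q - t) @ d) / (d @ d), 0.0, 1.0)
        t = t + lam * d                            # closest point on segment [t, S[:, i]]
    return t

def greedy_conic_hull(P, q, k):
    # Algorithm 1: columns of P are the input points; q is in DualCone(P).
    # Gnomonically project each ray onto hplane(q) = {x : <q - x, q> = 0},
    # then greedily pick the projected point farthest from the current hull.
    alpha = (q @ q) / (P.T @ q)                    # ray(p) meets hplane(q) at alpha * p
    Y = P * alpha                                  # columns of Y are gp_q(P)
    sel = [0]                                      # arbitrary starting point
    for _ in range(1, k):
        dists = [np.linalg.norm(Y[:, j] - proj_convex(Y[:, j], Y[:, sel]))
                 for j in range(Y.shape[1])]
        sel.append(int(np.argmax(dists)))
    return sel

rng = np.random.default_rng(0)
P = rng.uniform(0.1, 1.0, size=(5, 200))           # strictly positive columns
q = P.mean(axis=1)                                 # strictly positive, so in DualCone(P)
print(greedy_conic_hull(P, q, k=6))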
a sparse combination of points from P. More precisely, one can compute a point t which is a conic combination of O(1/ε²) points from P such that angle(q, t) ≤ angle(q, Conic(P)) + εΦ_P.
The significance of this result is as follows. Recall that we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and the entries of C are non-negative. Ideally each point in X is expressed as a sparse combination from the basis B, that is each column of C has very few non-zero entries. So suppose we are given any factorization BC, but C is dense. Then no problem, just throw out C, and use our Carathéodory theorem to compute a new matrix C′ with sparse columns. Namely, treat each column of X as the query q and run the theorem for the point set P = B, and then the non-zero entries of the corresponding column of C′ are just the selected combination from B. Not only does this mean we can sparsify any solution to our NMF problem (including those obtained by other methods), but it also means conceptually that rather than finding a good pair BC, one only needs to focus on finding the subset B, as is done in Algorithm 1. Note that Algorithm 1 allows non-negative inputs in P because Φ_P < π/2 ensures P can be rotated into the positive orthant.
While it appears the conic approximate Carathéodory theorem had not previously been stated, the convex version has a long history (e.g., implied by [Novikoff, 1962]). The algorithm to compute this sparse convex approximation is again a simple and fast greedy algorithm, which roughly speaking is a simplification of the Frank-Wolfe algorithm for this particular problem. Specifically, to find the projection of q onto Convex(P), start with any point t₀ ∈ Convex(P). In the ith round, find the point pᵢ ∈ P most extreme in the direction of q from tᵢ₋₁ (i.e., maximizing ⟨q − tᵢ₋₁, pᵢ⟩) and set tᵢ to be the closest point to q on the segment tᵢ₋₁pᵢ (thus simplifying Frank-Wolfe, as we ignore step size issues). The standard analysis of this algorithm (e.g., [Blum et al., 2016]) gives the following.
Theorem 3.3 (Convex Carathéodory). For a point set P ⊆ R^m, ε > 0, and q ∈ R^m, one can compute, in O(|P|m/ε²) time, a point t ∈ Convex(P), such that d(q, t) ≤ d(q, Convex(P)) + εΔ, where Δ = Δ_P. Furthermore, t is a convex combination of O(1/ε²) points of P.
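For completeness, here is a hedged sketch of the greedy routine of Theorem 3.3 that additionally tracks the convex-combination weights, making the O(1/ε²) sparsity of the output explicit (the function name and iteration cap are our own choices).

import numpy as np

def sparse_convex_approx(q, P, eps):
    # Greedy sparse approximation of proj(q, Convex(columns of P)); after
    # roughly 1/eps^2 rounds the error is within eps * diameter(P) of optimal.
    n = P.shape[1]
    w = np.zeros(n)
    w[0] = 1.0
    t = P[:, 0].copy()
    for _ in range(int(np.ceil(1.0 / eps**2)) + 1):
        i = int(np.argmax((q - t) @ P))
        d = P[:, i] - t
        if d @ d < 1e-15:
            break
        lam = np.clip(((q - t) @ d) / (d @ d), 0.0, 1.0)
        w *= (1.0 - lam)
        w[i] += lam                        # at most one new nonzero weight per round
        t = (1.0 - lam) * t + lam * P[:, i]
    return t, w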
Again by exploiting properties of the gnomonic projection we are able to prove a conic analog of the above theorem. Note for P ⊆ R^m, P is contained in the linear span of at most m points from P, and similarly the exact Carathéodory theorem states any point q ∈ Convex(P) is expressible as a convex combination of at most m + 1 points from P. As the conic hull lies between the linear case (with all combinations) and the convex case (with non-negative combinations summing to one), it is not surprising an exact conic Carathéodory theorem holds. However, the linear analog of the approximate convex Carathéodory theorem does not hold, and so the following conic result is not a priori obvious.

Theorem 3.4. Let P ⊆ R^m be a point set, let q be such that Φ_P(q) < π/2 − δ for some constant δ > 0, and let ε > 0 be a parameter. Then one can find, in O(|P|m/ε²) time, a point t ∈ Conic(P) such that angle(q, t) ≤ angle(q, Conic(P)) + εΦ_P(q). Moreover, t is a conic combination of O(1/ε²) points from P.
Due to space constraints, the detailed proof of Theorem 3.4 appears in the full version. In the proof, the dependence on δ is made clear, but we make a remark about it here. If ε is kept fixed, δ shows up in the running time roughly by a factor of tan²(π/2 − δ). Alternatively, if the running time is fixed, the approximation error will roughly depend on the factor 1/tan(π/2 − δ).
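The reduction behind Theorem 3.4 can be sketched the same way (again our own illustration, reusing sparse_convex_approx from the sketch above): project P onto hplane(q), run the convex routine with target q, which lies on the hyperplane, and pull the convex weights back to nonnegative conic weights on the original points.

def sparse_conic_approx(q, P, eps):
    # Requires <p, q> > 0 for every column p of P.
    alpha = (q @ q) / (P.T @ q)            # gnomonic scaling per column
    Y = P * alpha                          # columns of Y lie on hplane(q)
    t, w = sparse_convex_approx(q, Y, eps)
    conic_w = w * alpha                    # nonnegative weights on columns of P
    return P @ conic_w, conic_w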
We now give a simple example of a high dimensional point set which shows our bounded angle assumption is required for the conic Carathéodory theorem to hold. Let P consist of the standard basis vectors in R^m, let q be the all ones vector, and let ε be a parameter. Let X be a subset of P of size k, and consider aproj(q) = aproj(q, X). As P consists of basis vectors, each of which have all but one entry set to zero, aproj(q) will have at most k non-zero entries. By the symmetry of q it is also clear that all non-zero entries in aproj(q) should have the same value. Without loss of generality assume that this value is 1, and hence the magnitude of aproj(q) is √k. Thus for aproj(q) to be an ε-approximation to q, angle(aproj(q), q) = cos⁻¹(k/(√k·√m)) = cos⁻¹(√(k/m)) < ε. Hence for a fixed ε, the number of points required to ε-approximate q depends on m, while the conic Carathéodory theorem should be independent of m.
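A quick numeric check of this example (our own snippet):

import numpy as np

m = 1024
for k in (8, 64, 512):
    # angle between the all-ones vector and its best k-sparse conic approximation
    print(k, np.degrees(np.arccos(np.sqrt(k / m))))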
3.2  Approximating the Conic Hull
We now prove that Algorithm 1 yields an approximation to the conic hull of a given point set and hence an approximation to the nonnegative matrix factorization problem. As discussed above, previously Blum et al. [2016] provided the following (α, β)-approximation for Problem 3.2.

Theorem 3.5 ([Blum et al., 2016]). For a set P of n points in R^m, and ε > 0, the greedy strategy, which iteratively adds the point furthest from the current convex hull, gives a ((8ε^{1/3} + ε)Δ, O(1/ε^{2/3}))-approximation to Problem 3.2, and has running time O(nc(m + c/ε² + c²)), where c = O(k_opt/ε^{2/3}).

Our second result is a conic analog of the above theorem.

Theorem 3.6. Given a set P of n points in R^m such that Φ_P ≤ π/2 − δ for a constant δ > 0, and a value ε > 0, Algorithm 1 gives an ((8ε^{1/3} + ε)Φ_P, O(1/ε^{2/3}))-approximation to Problem 3.1, and has running time O(nc(m + c/ε² + c²)), where c = O(k_opt/ε^{2/3}).
Bounding the approximation error requires carefully handling the distortion due to the gnomonic projection, and the details are presented in the full version. Additionally, Blum et al. [2016] provide other (α, β)-approximations, for different values of α and β, and in the full version these other results are also shown to hold for the conic case.
4  Hardness of the Convex and Conic Problems
This section gives a reduction from d-SUM to the convex approximation of Problem 3.2, implying it is d-SUM-hard. In the full version a similar setup is used to argue the conic approximation of Problem 3.1 is d-SUM-hard. Actually, if Problem 3.1 allowed instances where Φ_P = π/2 the reduction would be virtually the same. However, arguing that the problem remains hard under our requirement that Φ_P ≤ π/2 − δ is non-trivial, and some of the calculations become challenging and lengthy. The reductions to both problems are partly inspired by Arora et al. [2016]. However, here, we use the somewhat non-standard version of d-SUM where repetitions are allowed as described below.
Problem 4.1 (d-SUM). In the d-SUM problem we are given a set S = {s₁, s₂, . . . , s_N} of N values, each in the interval [0, 1], and the goal is to determine if there is a set of d numbers (not necessarily distinct) whose sum is exactly d/2.
It was shown by Patrascu and Williams [2010] that if d-SUM can be solved in N^{o(d)} time then 3-SAT has a sub-exponential time algorithm, i.e., that the Exponential Time Hypothesis is false.

Theorem 4.2 (d-SUM-hard). Let d < N^{0.99}, η < 1. If d-SUM on N numbers of O(d log N) bits can be solved in O(N^{ηd}) time, then 3-SAT on n variables can be solved in 2^{o(n)} time.
We will prove the following decision version of Problem 3.2 is d-SUM-hard. Note in this section the dimension will be denoted by d rather than m, as this is standard for d-SUM reductions.

Problem 4.3. Given a set P of n points in R^d, a value ε > 0, and an integer k, is there a subset X ⊆ P of k points such that d_convex(X, P) ≤ εΔ, where Δ is the diameter of P.

Given an instance of d-SUM with N values S = {s₁, s₂, . . . , s_N} we construct an instance of Problem 4.3 where P ⊆ R^{d+2}, k = d, and ε = 1/3 (or any sufficiently small value). The idea is to create d clusters each containing N points corresponding to a choice of one of the sᵢ values. The clusters are positioned such that exactly one point from each cluster must be chosen. The d + 2 coordinates are labeled aᵢ for i ∈ [d], w, and v. Together, a₁, . . . , a_d determine the cluster. The w dimension is used to compute the sum of the chosen sᵢ values. The v dimension is used as a threshold to determine whether d-SUM is a yes or no instance to Problem 4.3. Let w(pⱼ) denote the w value of an arbitrary point pⱼ.
We assume d ≥ 2 as d-SUM is trivial for d = 1. Let e₁, e₂, . . . , e_d ∈ R^d be the standard basis in R^d, e₁ = (1, . . . , 0), e₂ = (0, 1, . . . , 0), . . . , and e_d = (0, . . . , 1). Together they form the unit d-simplex, and they define the d clusters in the construction. Finally, let Δ_γ = √(2 + (γs_max − γs_min)²) be a constant, where γ > 0 is the scaling constant used in the construction and s_max and s_min are, respectively, the maximum and minimum values in S.

Definition 4.4. The set of points P ⊆ R^{d+2} are the following
pᵢⱼ points: For each i ∈ [d], j ∈ [N], set (a₁, . . . , a_d) = eᵢ, w = γsⱼ and v = 0
q point: For each i ∈ [d], aᵢ = 1/d, w = γ/2, v = 0
q′ point: For each i ∈ [d], aᵢ = 1/d and w = γ/2, v = εΔ_γ
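The construction is straightforward to realize in code; the following hedged sketch builds the point set of Definition 4.4 as rows of a numpy array (γ is the scaling constant above, and the helper name is ours).

import numpy as np

def build_instance(s, d, gamma, eps=1.0/3.0):
    # Rows: d*N cluster points p_ij, then q, then q'.
    delta_gamma = np.sqrt(2.0 + (gamma*max(s) - gamma*min(s))**2)
    pts = []
    for i in range(d):
        for sj in s:
            p = np.zeros(d + 2)
            p[i] = 1.0                      # (a_1, ..., a_d) = e_i
            p[d] = gamma * sj               # w coordinate
            pts.append(p)                   # v coordinate stays 0
    q = np.zeros(d + 2); q[:d] = 1.0 / d; q[d] = gamma / 2.0
    qp = q.copy(); qp[d + 1] = eps * delta_gamma
    return np.array(pts + [q, qp])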
Lemma 4.5 (Proof in full version). The diameter of P, Δ_P, is equal to Δ_γ.

We prove completeness and soundness of the reduction. Below Pⁱ = ∪ⱼ pᵢⱼ denotes the ith cluster.

Observation 4.6. If max_{p∈P} d(p, Convex(X)) ≤ εΔ, then d_convex(X, P) ≤ εΔ: For point sets A and B = {b₁, . . . , b_m}, if we fix a ∈ Convex(A), then for any b = Σᵢ λᵢbᵢ ∈ Convex(B) we have ‖a − b‖ = ‖a − Σᵢ λᵢbᵢ‖ = ‖Σᵢ λᵢ(a − bᵢ)‖ ≤ Σᵢ λᵢ‖a − bᵢ‖ ≤ maxᵢ ‖a − bᵢ‖.
Lemma 4.7 (Completeness). If there is a subset {s_{k₁}, s_{k₂}, . . . , s_{k_d}} of d values (not necessarily distinct) such that Σ_{i∈[d]} s_{kᵢ} = d/2, then the above described instance of Problem 4.3 is a true instance, i.e. there is a d sized subset X ⊆ P with d_convex(X, P) ≤ εΔ.

Proof: For each value s_{kᵢ} consider the point xᵢ = (eᵢ, γs_{kᵢ}, 0), which by Definition 4.4 is a point in P. Let X = {x₁, . . . , x_d}. We now prove max_{p∈P} d(p, Convex(X)) ≤ εΔ, which by Observation 4.6 implies that d_convex(X, P) ≤ εΔ.

First observe that for any pᵢⱼ in P, d(pᵢⱼ, xᵢ) = √((w(pᵢⱼ) − w(xᵢ))²) = |γsⱼ − γs_{kᵢ}| ≤ εΔ. The only other points in P are q and q′. Note that d(q, q′) = εΔ_γ = εΔ from Lemma 4.5. Thus if we can prove that q ∈ Convex(X) then we will have shown max_{p∈P} d(p, Convex(X)) ≤ εΔ. Specifically, we prove that the convex combination x = (1/d) Σᵢ xᵢ is the point q. As X contains exactly one point from each set Pⁱ, and in each such set all points have aᵢ = 1 and all other aⱼ = 0, it holds that x has 1/d for all the a coordinates. All points in X have v = 0 and so this holds for x as well. Thus we only need to verify that w(x) = w(q) = γ/2, for which we have w(x) = (1/d) Σᵢ w(xᵢ) = (1/d) Σᵢ γs_{kᵢ} = (1/d)(γd/2) = γ/2.
Proving soundness requires some helper lemmas. Note that in the above proof we constructed a solution to Problem 4.3 that selected exactly one point from each cluster Pⁱ. We now prove that this is a required property.

Lemma 4.8 (Proof in full version). Let P ⊆ R^{d+2} be as defined above, and let X ⊆ P be a subset of size d. If d_convex(X, P) ≤ εΔ, then for all i, X contains exactly one point from Pⁱ.
Figure 4.1: Experimental results for feature selection on six different data sets (USPS, COIL20, Isolet, Reuters, BBC, warpPIE10P): SVM accuracy versus the number of selected features (25-150) for the Conic, Convex, X-RAY, Mutant X-RAY, and Conic+δ methods. Best viewed in color.
Lemma 4.9 (Proof in full version). If d_convex(X, P) ≤ εΔ, then q ∈ Convex(X) and moreover q = (1/d) Σ_{xᵢ∈X} xᵢ.
Lemma 4.10 (Soundness). Let P be an instance of Problem 4.3 generated from a d-SUM instance S, as described in Definition 4.4. If there is a subset X ⊆ P of size d such that d_convex(X, P) ≤ εΔ, then there is a choice of d values from S that sum to exactly d/2.
Proof: From Lemma 4.8 we know that X consists of exactly one point from each cluster Pⁱ. Thus for each xᵢ ∈ X, w(xᵢ) = γs_{kᵢ} for some s_{kᵢ} ∈ S. By Lemma 4.9, q = (1/d) Σᵢ xᵢ, which implies w(q) = (1/d) Σᵢ w(xᵢ). By Definition 4.4 w(q) = γ/2, which implies γ/2 = (1/d) Σᵢ w(xᵢ) = (1/d) Σᵢ γs_{kᵢ}. Thus we have a set {s_{k₁}, . . . , s_{k_d}} of d values from S such that Σᵢ s_{kᵢ} = d/2.
Lemma 4.7 and Lemma 4.10 immediately imply the following.
Theorem 4.11. For point sets in R^{d+2}, Problem 4.3 is d-SUM-hard.
5  Experimental Results
We report an experimental comparison of the proposed greedy algorithm for conic hulls, the greedy
algorithm for convex hulls (the conic hull algorithm without the projection step) [Blum et al., 2016],
the X-RAY (max) algorithm [Kumar et al., 2013], a modified version of X-RAY, dubbed mutant
X-RAY, which simply selects the point furthest away from the current cone (i.e., with the largest
residual), and a δ-shifted version of the conic hull algorithm described below. Other methods such
as Hottopixx [Recht et al., 2012, Gillis and Luce, 2014] and SPA [Gillis and Vavasis, 2014] were
not included due to their similar performance to the above methods. For our experiments, we
considered the performance of each of the methods when used to select features for a variety of SVM
classification tasks on various image, text, and speech data sets including several from the Arizona
State University feature selection repository [Li et al., 2016] as well as the UCI Reuters dataset and
the BBC News dataset [Greene and Cunningham, 2006]. The Reuters and BBC text datasets are
represented using the TF-IDF representation. For the Reuters dataset, only the ten most frequent
8
topics were used for classification. In all datasets, columns (corresponding to features) that were
identically equal to zero were removed from the data matrix.
For each problem, the data is divided using a 30/70 train/test split, the features are selected by the
indicated method, and then an SVM classifier is trained using only the selected features. For the conic
and convex hull methods, ε is set to 0.1. The accuracy (percent of correctly classified instances) is
plotted versus the number of selected features for each method in Figure 4.1. Additional experimental
results can be found in the full version. Generally speaking, the convex, mutant X-RAY, and shifted
conic algorithms seem to consistently perform the best on the tasks. The difference in performance
between convex and conic is most striking on the two text data sets Reuters and BBC. In the case of
BBC and Reuters, this is likely due to the fact that many of the columns of the TF-IDF matrix are
orthogonal. We note that the quality of both X-RAY and conic is improved if thresholding is used
when constructing the feature matrix, but they still seem to underperform the convex method for text
datasets.
The text datasets are also interesting as not only do they violate the explicit assumption in our theorems that the angular diameter of the conic hull be strictly less than π/2, but there are many mutually orthogonal columns of the document-feature matrix. This observation motivates the δ-shifted version of the conic hull algorithm that simply takes the input matrix X and adds δ to all of the entries (essentially translating the data along the all ones vector) and then applies the conic hull algorithm. Let 1_{a,b} denote the a × b matrix of ones. After a nonnegative shift, the angular assumption is satisfied, and the restricted NMF problem is that of approximating (X + δ1_{m,n}) as (B + δ1_{m,k})C, where the columns of B are again chosen from those of X. Under the Frobenius norm, ‖(X + δ1_{m,n}) − (B + δ1_{m,k})C‖²₂ = Σ_{i,j} (X_{ij} − B_{i,:}C_{:,j} + δ(1 − ‖C_{:,j}‖₁))². As C must be a nonnegative matrix, the shifted conic case acts like the original conic case plus a penalty that encourages the columns of C to sum to one (i.e., it is a hybrid between the conic case and the convex case). The plots illustrate the performance of the δ-shifted conic hull algorithm for δ = 10. After the shift, the performance more closely matches that of the convex and mutant X-RAY methods on TF-IDF features.
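A hedged sketch of this variant (our own, reusing greedy_conic_hull from the earlier sketch; the default shift matches the δ = 10 used in the plots):

def shifted_conic_features(X, k, delta=10.0):
    Xs = X + delta                         # X + delta * 1_{m,n}
    q = Xs.mean(axis=1)                    # strictly positive, so in DualCone(Xs)
    return greedy_conic_hull(Xs, q, k)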
Given these experimental results and the simplicity of the proposed convex and conic methods, we suggest that both methods should be added to practitioners' toolboxes. In particular, the superior performance of the convex algorithm on text datasets, compared to X-RAY and the conic algorithm, seems to suggest that these types of "convex" factorizations may be more desirable for TF-IDF features.
Acknowledgments
Greg Van Buskirk and Ben Raichel were partially supported by NSF CRII Award 1566137. Nicholas Ruozzi was partially supported by the DARPA Explainable Artificial Intelligence Program under contract number N66001-17-2-4032 and NSF grant III-1527312.
References
M. Berry, M. Browne, A. Langville, V. Pauca, and R. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics & Data Analysis, 52(1):155–173, 2007.
S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization - provably. SIAM J. Comput., 45(4):1582–1611, 2016.
S. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20(3):1364–1377, 2009.
M. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
A. Frieze, R. Kannan, and S. Vempala. Fast monte-carlo algorithms for finding low-rank approximations. J. ACM, 51(6):1025–1041, 2004.
P. Drineas, M. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM J. Matrix Analysis Applications, 30(2):844–881, 2008.
A. Çivril and M. Magdon-Ismail. Column subset selection via sparse approximation of SVD. Theor. Comput. Sci., 421:1–14, 2012.
V. Guruswami and A. Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1207–1214, 2012.
D. L. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In Advances in Neural Information Processing Systems (NIPS), 2003.
B. Recht, C. Re, J. Tropp, and V. Bittorf. Factoring nonnegative matrices with linear programs. In Advances in Neural Information Processing Systems (NIPS), pages 1214–1222, 2012.
A. Kumar, V. Sindhwani, and P. Kambadur. Fast conical hull algorithms for near-separable nonnegative matrix factorization. In International Conference on Machine Learning (ICML), pages 231–239, 2013.
A. R. Benson, J. D. Lee, B. Rajwa, and D. F. Gleich. Scalable methods for nonnegative matrix factorizations of near-separable tall-and-skinny matrices. In Advances in Neural Information Processing Systems (NIPS), pages 945–953, 2014.
N. Gillis and S. A. Vavasis. Fast and robust recursive algorithms for separable nonnegative matrix factorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4):698–714, 2014.
T. Zhou, J. A. Bilmes, and C. Guestrin. Divide-and-conquer learning by anchoring a conical hull. In Advances in Neural Information Processing Systems (NIPS), pages 1242–1250, 2014.
A. Kumar and V. Sindhwani. Near-separable Non-negative Matrix Factorization with l1 and Bregman Loss Functions, pages 343–351. 2015.
A. Blum, S. Har-Peled, and B. Raichel. Sparse approximation via generating point sets. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 548–557, 2016.
D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
C. H. Q. Ding, T. Li, and M. I. Jordan. Convex and semi-nonnegative matrix factorizations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):45–55, 2010.
K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms, 6(4), 2010.
S. Barman. Approximating nash equilibria and dense bipartite subgraphs via an approximate version of Carathéodory's theorem. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing (STOC), pages 361–369, 2015.
A. B. J. Novikoff. On convergence proofs on perceptrons. In Proc. Symp. Math. Theo. Automata, volume 12, pages 615–622, 1962.
M. Patrascu and R. Williams. On the possibility of faster SAT algorithms. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1065–1075, 2010.
N. Gillis and R. Luce. Robust near-separable nonnegative matrix factorization using linear optimization. Journal of Machine Learning Research, 15(1):1249–1280, 2014.
J. Li, K. Cheng, S. Wang, F. Morstatter, T. Robert, J. Tang, and H. Liu. Feature selection: A data perspective. arXiv:1601.07996, 2016.
D. Greene and P. Cunningham. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd International Conference on Machine Learning, pages 377–384. ACM, 2006.
6,465 | 6,848 | Rigorous Dynamics and Consistent Estimation in
Arbitrarily Conditioned Linear Systems
Alyson K. Fletcher
Dept. Statistics
UC Los Angeles
[email protected]
Mojtaba Sahraee-Ardakan
Dept. EE,
UC Los Angeles
[email protected]
Sundeep Rangan
Dept. ECE,
NYU
[email protected]
Philip Schniter
Dept. ECE,
The Ohio State Univ.
[email protected]
Abstract
We consider the problem of estimating a random vector x from noisy linear measurements y = Ax + w in the setting where parameters θ on the distribution of
x and w must be learned in addition to the vector x. This problem arises in a
wide range of statistical learning and linear inverse problems. Our main contribution shows that a computationally simple iterative message passing algorithm can
provably obtain asymptotically consistent estimates in a certain high-dimensional
large system limit (LSL) under very general parametrizations. Importantly, this
LSL applies to all right-rotationally random A, a much larger class of matrices
than i.i.d. sub-Gaussian matrices to which many past message passing approaches
are restricted. In addition, a simple testable condition is provided in which the
mean square error (MSE) on the vector x matches the Bayes optimal MSE predicted by the replica method. The proposed algorithm uses a combination of
Expectation-Maximization (EM) with a recently-developed Vector Approximate
Message Passing (VAMP) technique. We develop an analysis framework that
shows that the parameter estimates in each iteration of the algorithm converge to
deterministic limits that can be precisely predicted by a simple set of state evolution
(SE) equations. The SE equations, which extend those of VAMP without parameter adaptation, depend only on the initial parameter estimates and the statistical
properties of the problem and can be used to predict consistency and precisely
characterize other performance measures of the method.
1 Introduction
Consider the problem of estimating a random vector x0 from linear measurements y of the form

    $$ y = A x^0 + w, \qquad w \sim \mathcal{N}(0, \gamma_2^{-1} I), \qquad x^0 \sim p(x|\theta_1), \qquad (1) $$

where A ∈ R^{M×N} is a known matrix, p(x|θ1) is a density on x with parameters θ1, w is additive white Gaussian noise (AWGN) independent of x0, and γ2 > 0 is the noise precision (inverse variance). The goal is to estimate x0 while simultaneously learning the unknown parameters θ := (θ1, γ2) from the data y and A. This problem arises in Bayesian forms of linear inverse problems in signal processing, as well as in linear regression in statistics.
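As a concrete reference for this setup, the following minimal sketch generates a problem instance from model (1); the Bernoulli-Gaussian prior anticipates the simulations of Section 5, and the specific parameter values (rho, mu, var, gamma2) are illustrative placeholders, not values taken from the paper.

    import numpy as np

    def generate_problem(M=512, N=1024, rho=0.1, mu=0.0, var=1.0, gamma2=100.0, seed=0):
        # x^0 drawn i.i.d. from a Bernoulli-Gaussian (spike-and-slab) p(x|theta1),
        # theta1 = (rho, mu, var): each entry is zero w.p. 1-rho, else N(mu, var).
        rng = np.random.default_rng(seed)
        x0 = (rng.random(N) < rho) * rng.normal(mu, np.sqrt(var), size=N)
        # Known matrix A (i.i.d. Gaussian here for simplicity; the theory covers
        # the larger class of right-rotationally invariant A).
        A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
        # AWGN w with precision gamma2, giving y = A x^0 + w as in (1).
        y = A @ x0 + rng.normal(0.0, np.sqrt(1.0 / gamma2), size=M)
        return A, x0, y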
Exact estimation of the parameters ? via maximum likelihood or other methods is generally intractable.
One promising class of approximate methods combines approximate message passing (AMP) [1]
with expectation-maximization (EM). AMP and its generalizations [2] are a powerful, relatively
recent, class of algorithms based on expectation propagation-type techniques. The AMP methodology
has the benefit of being computationally fast and has been successfully applied to a wide range of
problems. Most importantly, for large, i.i.d., sub-Gaussian random matrices A, the performance of
AMP methods can be exactly predicted by a scalar state evolution (SE) [3, 4] that provides testable
conditions for optimality, even for non-convex priors. When the parameters ? are unknown, AMP
can be easily combined with EM for joint learning of the parameters θ and vector x [5-7].
A recent work [8] has combined EM with the so-called Vector AMP (VAMP) method of [9]. Similar to
AMP, VAMP is based on expectation propagation (EP) approximations of belief propagation [10, 11]
and can also be considered as a special case of expectation consistent (EC) approximate inference
[12-14]. VAMP's key attraction is that it applies to a larger class of matrices A than standard AMP
methods. Aside from Gaussian i.i.d. A, standard AMP techniques often diverge and require a variety
of modifications for stability [15-18]. In contrast, VAMP has provable SE analyses and convergence
guarantees that apply to all right-rotationally invariant matrices A [9, 19], a significantly larger class
of matrices than i.i.d. Gaussians. Under further conditions, the mean-squared error (MSE) of VAMP
matches the replica predictions for optimality [20-23]. For the case when the distribution on x and
w are unknown, the work [8] proposed to combine EM and VAMP using the approximate inference
framework of [24]. The combination of AMP with EM methods has been particularly successful
in neural modeling problems [25, 26]. While [8] provides numerical simulations demonstrating
excellent performance of this EM-VAMP method on a range of synthetic data, there were no provable
convergence guarantees.
Contributions of this work The SE analysis thus provides a rigorous and exact characterization of
the dynamics of EM-VAMP. In particular, the analysis can determine under which initial conditions
and problem statistics EM-VAMP will yield asymptotically consistent parameter estimates.
- Rigorous state evolution analysis: We provide a rigorous analysis of a generalization of
EM-VAMP that we call Adaptive VAMP. Similar to the analysis of VAMP, we consider
a certain large system limit (LSL) where the matrix A is random and right-rotationally
invariant. Importantly, this class of matrices is much more general than i.i.d. Gaussians used
in the original LSL analysis of Bayati and Montanari [3]. It is shown (Theorem 1) that in
the LSL, the parameter estimates at each iteration converge to deterministic limits θ̄k that
can be computed from a set of SE equations that extend those of VAMP. The analysis also
exactly characterizes the asymptotic joint distribution of the estimates x̂ and the true vector
x0. The SE equations depend only on the initial parameter estimate, the adaptation function,
and statistics on the matrix A, the vector x0 and noise w.
- Asymptotic consistency: It is also shown (Theorem 2) that under an additional identifiability
condition and a simple auto-tuning procedure, Adaptive VAMP can yield provably consistent
parameter estimates in the LSL. The technique uses an ML estimation approach from [7].
Remarkably, the result is true under very general problem formulations.
- Bayes optimality: In the case when the parameter estimates converge to the true value, the
behavior of adaptive VAMP matches that of VAMP. In this case, it is shown in [9] that, when
the SE equations have a unique fixed point, the MSE of VAMP matches the MSE of the
Bayes optimal estimator predicted by the replica method [21-23].
In this way, we have developed a computationally efficient method for a large class of linear inverse
problems with the properties that, in a certain high-dimensional limit: (1) the performance of the
algorithm can be exactly characterized, (2) the parameter estimates θ̂ are asymptotically consistent;
and (3) the algorithm has testable conditions for which the signal estimates x̂ match replica predictions
for Bayes optimality.
2 VAMP with Adaptation
Assume the prior on x can be written as

    $$ p(x|\theta_1) = \frac{1}{Z_1(\theta_1)} \exp\left[ -f_1(x|\theta_1) \right], \qquad f_1(x|\theta_1) = \sum_{n=1}^{N} f_1(x_n|\theta_1), \qquad (2) $$
Algorithm 1 Adaptive VAMP
Require: Matrix A ∈ R^{M×N}, measurement vector y, denoiser function g1(·), statistic function φ1(·), adaptation function T1(·) and number of iterations Nit.
 1: Select initial r10, γ10 ≥ 0, θ̂10, θ̂20.
 2: for k = 0, 1, ..., Nit - 1 do
 3:   // Input denoising
 4:   x̂1k = g1(r1k, γ1k, θ̂1k),  η1k = γ1k / ⟨g1'(r1k, γ1k, θ̂1k)⟩
 5:   γ2k = η1k - γ1k
 6:   r2k = (η1k x̂1k - γ1k r1k) / γ2k
 7:
 8:   // Input parameter update
 9:   θ̂1,k+1 = T1(μ1k),  μ1k = ⟨φ1(r1k, γ1k, θ̂1k)⟩
10:
11:   // Output estimation
12:   x̂2k = Qk^{-1}(θ̂2k A^T y + γ2k r2k),  Qk = θ̂2k A^T A + γ2k I
13:   η2k^{-1} = (1/N) tr(Qk^{-1})
14:   γ1,k+1 = η2k - γ2k
15:   r1,k+1 = (η2k x̂2k - γ2k r2k) / γ1,k+1
16:
17:   // Output parameter update
18:   θ̂2,k+1^{-1} = (1/N){ ||y - A x̂2k||^2 + tr(A Qk^{-1} A^T) }
19: end for
where f1(·) is a separable penalty function, θ1 is a parameter vector and Z1(θ1) is a normalization constant. With some abuse of notation, we have used f1(·) for the function on the vector x and its components xn. Since f1(x|θ1) is separable, x has i.i.d. components conditioned on θ1. The likelihood function under the Gaussian model (1) can be written as

    $$ p(y|x, \gamma_2) := \frac{1}{Z_2(\gamma_2)} \exp\left[ -f_2(x, y|\gamma_2) \right], \qquad f_2(x, y|\gamma_2) := \frac{\gamma_2}{2} \| y - Ax \|^2, \qquad (3) $$

where Z2(γ2) = (2π/γ2)^{N/2}. The joint density of x, y given parameters θ = (θ1, γ2) is then

    $$ p(x, y|\theta) = p(x|\theta_1)\, p(y|x, \gamma_2). \qquad (4) $$

The problem is to estimate the parameters θ = (θ1, γ2) along with the vector x0.
The steps of the proposed adaptive VAMP algorithm to perform this estimation are shown in Algorithm 1, which is a generalization of the EM-VAMP method in [8]. In each iteration, the algorithm produces, for i = 1, 2, estimates θ̂i of the parameter θi, along with estimates x̂ik of the vector x0. The algorithm is tuned by selecting three key functions: (i) a denoiser function g1(·); (ii) an adaptation statistic φ1(·); and (iii) a parameter selection function T1(·). The denoiser is used to produce the estimates x̂1k, while the adaptation statistic and parameter estimation functions produce the estimates θ̂1k.
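To make the data flow of Algorithm 1 concrete, here is a minimal NumPy sketch of one possible rendering of the loop; the denoiser g1 (returning the estimate and its mean derivative) and the adaptation functions phi1 and T1 are supplied by the caller, and numerical safeguards a practical implementation would need (e.g. clipping the precisions to a positive range) are omitted.

    import numpy as np

    def adaptive_vamp(A, y, g1, phi1, T1, r1, gamma1, theta1, theta2, n_it=20):
        # g1(r, gamma, theta) -> (x_hat, mean of dg1/dr); phi1 -> (N, d) statistics.
        N = A.shape[1]
        AtA, Aty = A.T @ A, A.T @ y
        for k in range(n_it):
            # Input denoising (lines 3-6).
            x1, alpha1 = g1(r1, gamma1, theta1)
            eta1 = gamma1 / alpha1
            gamma2 = eta1 - gamma1
            r2 = (eta1 * x1 - gamma1 * r1) / gamma2
            # Input parameter update (line 9).
            theta1 = T1(np.mean(phi1(r1, gamma1, theta1), axis=0))
            # Output (LMMSE) estimation (lines 12-15).
            Qinv = np.linalg.inv(theta2 * AtA + gamma2 * np.eye(N))
            x2 = Qinv @ (theta2 * Aty + gamma2 * r2)
            eta2 = N / np.trace(Qinv)
            gamma1 = eta2 - gamma2
            r1 = (eta2 * x2 - gamma2 * r2) / gamma1
            # Output parameter (noise precision) update (line 18).
            theta2 = N / (np.sum((y - A @ x2) ** 2) + np.trace(A @ Qinv @ A.T))
        return x1, theta1, theta2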
Denoiser function The denoiser function g1(·) is discussed in detail in [9] and is generally based on the prior p(x|θ1). In the original EM-VAMP algorithm [8], g1(·) is selected as the so-called minimum mean-squared error (MMSE) denoiser. Specifically, in each iteration, the variables ri, γi and θ̂i were used to construct belief estimates,

    $$ b_i(x|r_i, \gamma_i, \hat{\theta}_i) \propto \exp\left[ -f_i(x, y|\hat{\theta}_i) - \frac{\gamma_i}{2} \| x - r_i \|^2 \right], \qquad (5) $$

which represent estimates of the posterior density p(x|y, θ). To keep the notation symmetric, we have written f1(x, y|θ̂1) for f1(x|θ̂1) even though the first penalty function does not depend on y. The EM-VAMP method then selects g1(·) to be the mean of the belief estimate,

    $$ g_1(r_1, \gamma_1, \theta_1) := \mathbb{E}\left[ x \,|\, r_1, \gamma_1, \theta_1 \right]. \qquad (6) $$

For line 4 of Algorithm 1, we define [g1'(r1k, γ1k, θ1)]_n := ∂[g1(r1k, γ1k, θ1)]_n / ∂r1n and we use ⟨·⟩ for the empirical mean of a vector, i.e., ⟨u⟩ = (1/N) Σ_{n=1}^N u_n. Hence, η1k in line 4 is a scaled inverse divergence. It is shown in [9] that, for the MMSE denoiser (6), η1k is the inverse average posterior variance.
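For concreteness, the sketch below works out the MMSE denoiser (6) and its divergence for a Bernoulli-Gaussian prior (the prior used in Section 5); the algebra is standard Gaussian conjugacy, and the parametrization theta = (rho, mu, var) is our own labeling rather than the paper's.

    import numpy as np

    def g1_bernoulli_gaussian(r, gamma, theta):
        # E[x | r] for x = 0 w.p. 1-rho, x ~ N(mu, var) w.p. rho, r = x + N(0, 1/gamma).
        rho, mu, var = theta
        v_noise = 1.0 / gamma
        # Slab posterior moments (Gaussian conjugacy).
        v_post = 1.0 / (1.0 / var + gamma)
        m_post = v_post * (mu / var + gamma * r)
        # Slab responsibility from the two marginal likelihoods of r.
        log_slab = -0.5 * (r - mu) ** 2 / (var + v_noise) - 0.5 * np.log(var + v_noise)
        log_spike = -0.5 * r ** 2 / v_noise - 0.5 * np.log(v_noise)
        p = 1.0 / (1.0 + (1.0 - rho) / rho * np.exp(log_spike - log_slab))
        x_hat = p * m_post
        # Divergence <g1'> needed in line 4: d x_hat / d r, averaged.
        du = gamma * r - (r - mu) / (var + v_noise)   # d/dr of the log-odds
        dxdr = p * (1.0 - p) * du * m_post + p * v_post * gamma
        return x_hat, np.mean(dxdr)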
Estimation for θ1 with finite statistics For the EM-VAMP algorithm [8], the parameter update for θ̂1,k+1 is performed via a maximization

    $$ \hat{\theta}_{1,k+1} = \arg\max_{\theta_1} \mathbb{E}\left[ \ln p(x|\theta_1) \,\big|\, r_{1k}, \gamma_{1k}, \hat{\theta}_{1k} \right], \qquad (7) $$

where the expectation is with respect to the belief estimate b_i(·) in (5). It is shown in [8] that using (7) is equivalent to an approximation of the M-step in the standard EM method. In the adaptive VAMP method in Algorithm 1, the M-step maximization (7) is replaced by line 9. Note that line 9 again uses ⟨·⟩ to denote the empirical average,

    $$ \mu_{1k} = \langle \phi_1(r_{1k}, \gamma_{1k}, \hat{\theta}_{1k}) \rangle := \frac{1}{N} \sum_{n=1}^{N} \phi_1(r_{1k,n}, \gamma_{1k}, \hat{\theta}_{1k}) \in \mathbb{R}^d, \qquad (8) $$

so μ1k is the empirical average of some d-dimensional statistic φ1(·) over the components of r1k. The parameter estimate update θ̂1,k+1 is then computed from some function of this statistic, T1(μ1k). We show in the full paper [27] that there are two important cases where the EM update (7) can be computed from a finite-dimensional statistic as in line 9: (i) the prior p(x|θ1) is given by an exponential family, f1(x|θ1) = θ1^T φ(x) for some sufficient statistic φ(x); and (ii) there are a finite number of values for the parameter θ1. For other cases, we can approximate more general parametrizations via discretization of the parameter values θ1. The updates in line 9 can also incorporate other types of updates as we will see below. But, we stress that it is preferable to compute the estimate for θ1 directly from the maximization (7); the use of a finite-dimensional statistic is for the sake of analysis.
Estimation for γ2 with finite statistics It will be useful to also write the adaptation of γ2 in line 18 of Algorithm 1 in a similar form as line 9. First, take a singular value decomposition (SVD) of A of the form

    $$ A = U S V^T, \qquad S = \mathrm{Diag}(s), \qquad (9) $$

and define the transformed error and transformed noise,

    $$ q_k := V^T (r_{2k} - x^0), \qquad \xi := U^T w. \qquad (10) $$

Then, it is shown in the full paper [27] that θ̂2,k+1 in line 18 can be written as

    $$ \hat{\theta}_{2,k+1} = T_2(\mu_{2k}) := \frac{1}{\mu_{2k}}, \qquad \mu_{2k} = \langle \phi_2(q_k, \xi, s, \gamma_{2k}, \hat{\theta}_{2k}) \rangle, \qquad (11) $$

where

    $$ \phi_2(q, \xi, s, \gamma_2, \hat{\theta}_2) := \frac{\gamma_2^2}{(s^2 \hat{\theta}_2 + \gamma_2)^2} (sq + \xi)^2 + \frac{s^2}{s^2 \hat{\theta}_2 + \gamma_2}. \qquad (12) $$

Of course, we cannot directly compute q_k in (10) since we do not know the true x0. Nevertheless, this form will be useful for analysis.
3 State Evolution in the Large System Limit

3.1 Large System Limit
Similar to the analysis of VAMP in [9], we analyze Algorithm 1 in a certain large system limit (LSL). The LSL framework was developed by Bayati and Montanari in [3] and we review some of the key definitions in the full paper [27]. As in the analysis of VAMP, the LSL considers a sequence of problems indexed by the vector dimension N. For each N, we assume that there is a "true" vector x0 ∈ R^N that is observed through measurements of the form

    $$ y = A x^0 + w \in \mathbb{R}^N, \qquad w \sim \mathcal{N}(0, \gamma_2^{-1} I_N), \qquad (13) $$

where A ∈ R^{N×N} is a known transform, w is Gaussian noise and γ2 represents a "true" noise precision. The noise precision γ2 does not change with N.
Identical to [9], the transform A is modeled as a large, right-orthogonally invariant random matrix.
Specifically, we assume that it has an SVD of the form (9) where U and V are N × N orthogonal
matrices such that U is deterministic and V is Haar distributed (i.e. uniformly distributed on the set
of orthogonal matrices). As described in [9], although we have assumed a square matrix A, we can
consider general rectangular A by adding zero singular values.
Using the definitions in the full paper [27], we assume that the components of the singular-value vector s ∈ R^N in (9) converge empirically with second-order moments as

    $$ \lim_{N \to \infty} \{ s_n \} \overset{PL(2)}{=} S, \qquad (14) $$

for some non-negative random variable S with E[S] > 0 and S ∈ [0, S_max] for some finite maximum value S_max. Additionally, we assume that the components of the true vector, x0, and the initial input to the denoiser, r10, converge empirically as

    $$ \lim_{N \to \infty} \{ (r_{10,n}, x^0_n) \} \overset{PL(2)}{=} (R_{10}, X^0), \qquad R_{10} = X^0 + P_0, \qquad P_0 \sim \mathcal{N}(0, \tau_{10}), \qquad (15) $$

where X0 is a random variable representing the true distribution of the components x0; P0 is an initial error and τ10 is an initial error variance. The variable X0 may be distributed as X0 ~ p(·|θ1) for some true parameter θ1. However, in order to incorporate under-modeling, the existence of such a true parameter is not required. We also assume that the initial second-order term and parameter estimate converge almost surely as

    $$ \lim_{N \to \infty} (\gamma_{10}, \hat{\theta}_{10}, \hat{\theta}_{20}) = (\overline{\gamma}_{10}, \overline{\theta}_{10}, \overline{\theta}_{20}) \qquad (16) $$

for some γ̄10 > 0 and (θ̄10, θ̄20).
3.2 Error and Sensitivity Functions
We next need to introduce parametric forms of two key terms from [9]: error functions and sensitivity functions. The error functions describe the MSE of the denoiser and output estimators under AWGN measurements. Specifically, for the denoiser g1(·, γ1, θ̂1), we define the error function as

    $$ \mathcal{E}_1(\gamma_1, \tau_1, \hat{\theta}_1) := \mathbb{E}\left[ (g_1(R_1, \gamma_1, \hat{\theta}_1) - X^0)^2 \right], \qquad R_1 = X^0 + P, \quad P \sim \mathcal{N}(0, \tau_1), \qquad (17) $$

where X0 is distributed according to the true distribution of the components x0 (see above). The function E1(γ1, τ1, θ̂1) thus represents the MSE of the estimate X̂ = g1(R1, γ1, θ̂1) from a measurement R1 corrupted by Gaussian noise of variance τ1 under the parameter estimate θ̂1. For the output estimator, we define the error function as

    $$ \mathcal{E}_2(\gamma_2, \tau_2, \hat{\theta}_2) := \lim_{N \to \infty} \frac{1}{N} \mathbb{E} \| g_2(r_2, \gamma_2, \hat{\theta}_2) - x^0 \|^2, $$
    $$ x^0 = r_2 + q, \quad q \sim \mathcal{N}(0, \tau_2 I), \quad y = A x^0 + w, \quad w \sim \mathcal{N}(0, \gamma_2^{-1} I), \qquad (18) $$

which is the average per-component error of the vector estimate under Gaussian noise. The dependence on the true noise precision, γ2, is suppressed.

The sensitivity functions describe the expected divergence of the estimator. For the denoiser, the sensitivity function is defined as

    $$ A_1(\gamma_1, \tau_1, \hat{\theta}_1) := \mathbb{E}\left[ g_1'(R_1, \gamma_1, \hat{\theta}_1) \right], \qquad R_1 = X^0 + P, \quad P \sim \mathcal{N}(0, \tau_1), \qquad (19) $$

which is the average derivative under a Gaussian noise input. For the output estimator, the sensitivity is defined as

    $$ A_2(\gamma_2, \tau_2, \hat{\theta}_2) := \lim_{N \to \infty} \frac{1}{N} \mathrm{tr}\left[ \frac{\partial g_2(r_2, \gamma_2, \hat{\theta}_2)}{\partial r_2} \right], \qquad (20) $$

where r2 is distributed as in (18). The paper [9] discusses the error and sensitivity functions in detail and shows how these functions can be easily evaluated.
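Since (17) and (19) are expectations over a scalar AWGN channel, they are straightforward to estimate by Monte Carlo; below is a sketch for the input side, assuming a sampler for X0 and a denoiser with the (values, mean derivative) interface used in the earlier sketch.

    import numpy as np

    def error_sensitivity_mc(g1, sample_x0, gamma1, tau1, theta1, n=100000, seed=0):
        # Estimates E1 in (17) and A1 in (19) on the channel R1 = X0 + N(0, tau1).
        rng = np.random.default_rng(seed)
        x0 = sample_x0(n, rng)
        r1 = x0 + rng.normal(0.0, np.sqrt(tau1), size=n)
        x_hat, mean_deriv = g1(r1, gamma1, theta1)
        E1 = np.mean((x_hat - x0) ** 2)
        return E1, mean_deriv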
3.3 State Evolution Equations
We can now describe our main result, the SE equations for Adaptive VAMP. The equations are an extension of those in the VAMP paper [9], with modifications for the parameter estimation. For a given iteration k ≥ 1, consider the set of components,

    $$ \{ (\hat{x}_{1k,n}, r_{1k,n}, x^0_n), \ n = 1, \ldots, N \}. $$

This set represents the components of the true vector x0, its corresponding estimate x̂1k and the denoiser input r1k. We will show that, under certain assumptions, these components converge empirically as

    $$ \lim_{N \to \infty} \{ (\hat{x}_{1k,n}, r_{1k,n}, x^0_n) \} \overset{PL(2)}{=} (\hat{X}_{1k}, R_{1k}, X^0), \qquad (21) $$

where the random variables (X̂1k, R1k, X0) are given by

    $$ R_{1k} = X^0 + P_k, \qquad P_k \sim \mathcal{N}(0, \tau_{1k}), \qquad \hat{X}_{1k} = g_1(R_{1k}, \overline{\gamma}_{1k}, \overline{\theta}_{1k}), \qquad (22) $$

for constants γ̄1k, θ̄1k and τ1k that will be defined below. We will also see that θ̂1k → θ̄1k, so θ̄1k represents the asymptotic parameter estimate. The model (22) shows that each component r1k,n appears as the true component x0_n plus Gaussian noise. The corresponding estimate x̂1k,n then appears as the denoiser output with r1k,n as the input and θ̄1k as the parameter estimate. Hence, the asymptotic behavior of any component x0_n and its corresponding x̂1k,n is identical to a simple scalar system. We will refer to (21)-(22) as the denoiser's scalar equivalent model.

We will also show that the transformed errors q_k and noise ξ in (10) and singular values s converge empirically to a set of independent random variables (Q_k, Ξ, S) given by

    $$ \lim_{N \to \infty} \{ (q_{k,n}, \xi_n, s_n) \} \overset{PL(2)}{=} (Q_k, \Xi, S), \qquad Q_k \sim \mathcal{N}(0, \tau_{2k}), \qquad \Xi \sim \mathcal{N}(0, \gamma_2^{-1}), \qquad (23) $$

where S has the distribution of the singular values of A, τ2k is a variance that will be defined below and γ2 is the true noise precision in the measurement model (13). All the variables in (23) are independent. Thus (23) is a scalar equivalent model for the output estimator.
The variance terms are defined recursively through the state evolution equations,

    $$ \overline{\alpha}_{1k} = A_1(\overline{\gamma}_{1k}, \tau_{1k}, \overline{\theta}_{1k}), \qquad \overline{\eta}_{1k} = \frac{\overline{\gamma}_{1k}}{\overline{\alpha}_{1k}}, \qquad \overline{\gamma}_{2k} = \overline{\eta}_{1k} - \overline{\gamma}_{1k}, \qquad (24a) $$
    $$ \overline{\theta}_{1,k+1} = T_1(\overline{\mu}_{1k}), \qquad \overline{\mu}_{1k} = \mathbb{E}\, \phi_1(R_{1k}, \overline{\gamma}_{1k}, \overline{\theta}_{1k}), \qquad (24b) $$
    $$ \tau_{2k} = \frac{1}{(1 - \overline{\alpha}_{1k})^2} \left[ \mathcal{E}_1(\overline{\gamma}_{1k}, \tau_{1k}, \overline{\theta}_{1k}) - \overline{\alpha}_{1k}^2 \tau_{1k} \right], \qquad (24c) $$
    $$ \overline{\alpha}_{2k} = A_2(\overline{\gamma}_{2k}, \tau_{2k}, \overline{\theta}_{2k}), \qquad \overline{\eta}_{2k} = \frac{\overline{\gamma}_{2k}}{\overline{\alpha}_{2k}}, \qquad \overline{\gamma}_{1,k+1} = \overline{\eta}_{2k} - \overline{\gamma}_{2k}, \qquad (24d) $$
    $$ \overline{\theta}_{2,k+1} = T_2(\overline{\mu}_{2k}), \qquad \overline{\mu}_{2k} = \mathbb{E}\, \phi_2(Q_k, \Xi, S, \overline{\gamma}_{2k}, \overline{\theta}_{2k}), \qquad (24e) $$
    $$ \tau_{1,k+1} = \frac{1}{(1 - \overline{\alpha}_{2k})^2} \left[ \mathcal{E}_2(\overline{\gamma}_{2k}, \tau_{2k}) - \overline{\alpha}_{2k}^2 \tau_{2k} \right], \qquad (24f) $$

which are initialized with τ10 = E[(R10 - X0)^2] and the (γ̄10, θ̄10, θ̄20) defined from the limit (16). The expectation in (24b) is with respect to the random variables (21) and the expectation in (24e) is with respect to the random variables (23).
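The SE recursion itself is then a small deterministic loop; a sketch with the error, sensitivity, and adaptation maps passed in as callables (the Monte Carlo estimators above could serve for the input side), with overbars dropped from the variable names:

    def state_evolution(A1, E1, A2, E2, T1, mu1, T2, mu2,
                        tau1, gamma1, theta1, theta2, n_it=20):
        # Deterministic recursion (24a)-(24f); mu1/mu2 compute the expectations
        # in (24b)/(24e) for the current state.
        for k in range(n_it):
            alpha1 = A1(gamma1, tau1, theta1)                              # (24a)
            gamma2 = gamma1 / alpha1 - gamma1
            tau2 = (E1(gamma1, tau1, theta1) - alpha1 ** 2 * tau1) \
                   / (1.0 - alpha1) ** 2                                   # (24c)
            theta1 = T1(mu1(gamma1, tau1, theta1))                         # (24b)
            alpha2 = A2(gamma2, tau2, theta2)                              # (24d)
            gamma1 = gamma2 / alpha2 - gamma2
            tau1 = (E2(gamma2, tau2) - alpha2 ** 2 * tau2) \
                   / (1.0 - alpha2) ** 2                                   # (24f)
            theta2 = T2(mu2(gamma2, tau2, theta2))                         # (24e)
        return tau1, gamma1, theta1, theta2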
Theorem 1. Consider the outputs of Algorithm 1. Under the above assumptions and definitions, assume additionally that for all iterations k:

(i) The solution ᾱ1k from the SE equations (24) satisfies ᾱ1k ∈ (0, 1).
(ii) The functions A_i(·), E_i(·) and T_i(·) are continuous at (γi, τi, θ̂i, μi) = (γ̄ik, τik, θ̄ik, μ̄ik).
(iii) The denoiser function g1(r1, γ1, θ̂1) and its derivative g1'(r1, γ1, θ̂1) are uniformly Lipschitz in r1 at (γ1, θ̂1) = (γ̄1k, θ̄1k). (See the full paper [27] for a precise definition of uniform Lipschitz continuity.)
(iv) The adaptation statistic φ1(r1, γ1, θ̂1) is uniformly pseudo-Lipschitz of order 2 in r1 at (γ1, θ̂1) = (γ̄1k, θ̄1k).

Then, for any fixed iteration k ≥ 0,

    $$ \lim_{N \to \infty} (\alpha_{ik}, \eta_{ik}, \gamma_{ik}, \mu_{ik}, \hat{\theta}_{ik}) = (\overline{\alpha}_{ik}, \overline{\eta}_{ik}, \overline{\gamma}_{ik}, \overline{\mu}_{ik}, \overline{\theta}_{ik}) \qquad (25) $$

almost surely. In addition, the empirical limit (21) holds almost surely for all k > 0, and (23) holds almost surely for all k ≥ 0.
Theorem 1 shows that, in the LSL, the parameter estimates θ̂ik converge to deterministic limits θ̄ik that can be precisely predicted by the state-evolution equations. The SE equations incorporate the true distribution of the components on the prior x0, the true noise precision γ2, and the specific parameter estimation and denoiser functions used by the Adaptive VAMP method. In addition, similar to the SE analysis of VAMP in [9], the SE equations also predict the asymptotic joint distribution of x0 and their estimates x̂ik. This joint distribution can be used to measure various performance metrics such as MSE; see [9]. In this way, we have provided a rigorous and precise characterization of a class of adaptive VAMP algorithms that includes EM-VAMP.
4 Consistent Parameter Estimation with Variance Auto-Tuning

By comparing the deterministic limits θ̄ik with the true parameters θi, one can determine under which problem conditions the parameter estimates of adaptive VAMP are asymptotically consistent. In this section, we show that with a particular choice of parameter estimation functions, one can obtain provably asymptotically consistent parameter estimates under suitable identifiability conditions. We call the method variance auto-tuning, which generalizes the approach in [7].

Definition 1. Let p(x|θ1) be a parametrized set of densities. Given a finite-dimensional statistic φ1(r), consider the mapping

    $$ (\theta_1, \tau_1) \mapsto \mathbb{E}\left[ \phi_1(R) \,|\, \theta_1, \tau_1 \right], \qquad R = X + \mathcal{N}(0, \tau_1), \qquad X \sim p(x|\theta_1). \qquad (26) $$

We say that p(x|θ1) is identifiable in Gaussian noise if there exists a finite-dimensional statistic φ1(r) ∈ R^d such that (i) φ1(r) is pseudo-Lipschitz continuous of order 2; and (ii) the mapping (26) has a continuous inverse.

Theorem 2. Under the assumptions of Theorem 1, suppose that X0 follows X0 ~ p(x|θ1^0) for some true parameter θ1^0. If p(x|θ1) is identifiable in Gaussian noise, there exists an adaptation rule such that, for any iteration k, the estimate θ̂1k and noise estimate τ̂1k are asymptotically consistent in that lim_{N→∞} θ̂1k = θ1^0 and lim_{N→∞} τ̂1k = τ1k almost surely.

The theorem is proved in the full paper [27], which also provides details on how to perform the adaptation. A similar result for consistent estimation of the noise precision γ2 is also given. The result is remarkable as it shows that a simple variant of EM-VAMP can provide provably consistent parameter estimates under extremely general distributions.
5 Numerical Simulations

Sparse signal recovery: The paper [8] presented several numerical experiments to assess the performance of EM-VAMP relative to other methods. Here, our goal is to confirm that EM-VAMP's performance matches the SE predictions. As in [8], we consider a sparse linear regression problem of estimating a vector x from measurements y from (1) without knowing the signal parameters θ1 or the noise precision γ2 > 0. Details are given in the full paper [27]. Briefly, to model the sparsity, x is drawn from an i.i.d. Bernoulli-Gaussian (i.e., spike and slab) prior with unknown sparsity level, mean and variance. The true sparsity level is 0.1. Following [15, 16], we take A ∈ R^{M×N} to be a random right-orthogonally invariant matrix with dimensions M = 512, N = 1024 and with the condition number set to κ = 100 (high condition number matrices are known to be problematic for conventional AMP methods). The left panel of Fig. 1 shows the normalized mean square error (NMSE) for various algorithms. The full paper [27] describes the algorithms in detail and also shows similar results for κ = 10.
Figure 1: Numerical simulations. Left panel: sparse signal recovery: NMSE versus iteration for a random matrix with condition number κ = 100. Right panel: NMSE for sparse image recovery as a function of the measurement ratio M/N.
We see several important features. First, for all variants of VAMP and EM-VAMP, the SE equations provide an excellent prediction of the per-iteration performance of the algorithm. Second, consistent with the simulations in [9], the oracle VAMP converges remarkably fast (about 10 iterations). Third, the performance of EM-VAMP with auto-tuning is virtually indistinguishable from oracle VAMP, suggesting that the parameter estimates are near perfect from the very first iteration. Fourth, the EM-VAMP method performs initially worse than the oracle VAMP, but these errors are exactly predicted by the SE. Finally, all the VAMP and EM-VAMP algorithms exhibit much faster convergence than EM-BG-AMP. In fact, consistent with observations in [8], EM-BG-AMP begins to diverge at higher condition numbers. In contrast, the VAMP algorithms are stable.
Compressed sensing image recovery While the theory is developed for theoretical signal priors, we demonstrate that the proposed EM-VAMP algorithm can be effective on natural images. Specifically, we repeat the experiments in [28] for recovery of a sparse image. Again, see the full paper [27] for details, including a picture of the image and the various reconstructions. An N = 256 x 256 image of a satellite with K = 6678 pixels is transformed through an undersampled random transform A = diag(s)PH, where H is a fast Hadamard transform, P is a random subselection to M measurements and s is a scaling to adjust the condition number. As in the previous example, the image vector x is modeled as sparse Bernoulli-Gaussian and the EM-VAMP algorithm is used to estimate the sparsity ratio, signal variance and noise variance. The transform is set to have a condition number of κ = 100. We see from the right panel of Fig. 1 that the EM-VAMP algorithm is able to reconstruct the images with improved performance over the standard basis pursuit denoising method spgl1 [29] and the EM-BG-GAMP method from [16].
6 Conclusions
Due to its analytic tractability, computational simplicity, and potential for Bayes optimal inference,
VAMP is a promising technique for statistical linear inverse problems. However, a key challenge in
using VAMP and related methods is the need to precisely specify the distribution on the problem
parameters. This work provides a rigorous foundation for analyzing VAMP in combination with
various parameter adaptation techniques including EM. The analysis reveals that VAMP, with appropriate tuning, can also provide consistent parameter estimates under very general settings, thus
yielding a powerful approach for statistical linear inverse problems.
Acknowledgments
A. K. Fletcher and M. Saharee-Ardakan were supported in part by the National Science Foundation
under Grants 1254204 and 1738286 and the Office of Naval Research under Grant N00014-15-1-2677.
S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336,
and 1547332, and the industrial affiliates of NYU WIRELESS. The work of P. Schniter was supported
in part by the National Science Foundation under Grant CCF-1527162.
References
[1] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18914-18919, Nov. 2009.
[2] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Inform. Theory, Saint Petersburg, Russia, Jul.-Aug. 2011, pp. 2174-2178.
[3] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764-785, Feb. 2011.
[4] A. Javanmard and A. Montanari, "State evolution for general approximate message passing algorithms, with applications to spatial coupling," Information and Inference, vol. 2, no. 2, pp. 115-144, 2013.
[5] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, "Statistical-physics-based reconstruction in compressed sensing," Physical Review X, vol. 2, no. 2, p. 021005, 2012.
[6] J. P. Vila and P. Schniter, "Expectation-maximization Gaussian-mixture approximate message passing," IEEE Trans. Signal Processing, vol. 61, no. 19, pp. 4658-4672, 2013.
[7] U. S. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser, "Approximate message passing with consistent parameter estimation and applications to sparse learning," IEEE Trans. Info. Theory, vol. 60, no. 5, pp. 2969-2985, Apr. 2014.
[8] A. K. Fletcher and P. Schniter, "Learning and free energies for vector approximate message passing," Proc. IEEE ICASSP, March 2017.
[9] S. Rangan, P. Schniter, and A. K. Fletcher, "Vector approximate message passing," Proc. IEEE ISIT, June 2017.
[10] M. Seeger, "Bayesian inference and optimal design for the sparse linear model," J. Machine Learning Research, vol. 9, pp. 759-813, Sep. 2008.
[11] M. W. Seeger and H. Nickisch, "Fast convergent algorithms for expectation propagation approximate Bayesian inference," in International Conference on Artificial Intelligence and Statistics, 2011, pp. 652-660.
[12] M. Opper and O. Winther, "Expectation consistent free energies for approximate inference," in Proc. NIPS, 2004, pp. 1001-1008.
[13] M. Opper and O. Winther, "Expectation consistent approximate inference," J. Mach. Learning Res., vol. 1, pp. 2177-2204, 2005.
[14] A. K. Fletcher, M. Sahraee-Ardakan, S. Rangan, and P. Schniter, "Expectation consistent approximate inference: Generalizations and convergence," in Proc. IEEE ISIT, 2016, pp. 190-194.
[15] S. Rangan, P. Schniter, and A. Fletcher, "On the convergence of approximate message passing with arbitrary matrices," in Proc. IEEE ISIT, Jul. 2014, pp. 236-240.
[16] J. Vila, P. Schniter, S. Rangan, F. Krzakala, and L. Zdeborová, "Adaptive damping and mean removal for the generalized approximate message passing algorithm," in Proc. IEEE ICASSP, 2015, pp. 2021-2025.
[17] A. Manoel, F. Krzakala, E. W. Tramel, and L. Zdeborová, "Swept approximate message passing for sparse estimation," in Proc. ICML, 2015, pp. 1123-1132.
[18] S. Rangan, A. K. Fletcher, P. Schniter, and U. S. Kamilov, "Inference for generalized linear models via alternating directions and Bethe free energy minimization," IEEE Transactions on Information Theory, vol. 63, no. 1, pp. 676-697, 2017.
[19] K. Takeuchi, "Rigorous dynamics of expectation-propagation-based signal recovery from unitarily invariant measurements," Proc. IEEE ISIT, June 2017.
[20] S. Rangan, A. Fletcher, and V. K. Goyal, "Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 58, no. 3, pp. 1902-1923, Mar. 2012.
[21] A. M. Tulino, G. Caire, S. Verdú, and S. Shamai, "Support recovery with sparsely sampled free random matrices," IEEE Trans. Inform. Theory, vol. 59, no. 7, pp. 4243-4271, 2013.
[22] J. Barbier, M. Dia, N. Macris, and F. Krzakala, "The mutual information in random linear estimation," arXiv:1607.02335, 2016.
[23] G. Reeves and H. D. Pfister, "The replica-symmetric prediction for compressed sensing with Gaussian matrices is exact," in Proc. IEEE ISIT, 2016.
[24] T. Heskes, O. Zoeter, and W. Wiegerinck, "Approximate expectation maximization," NIPS, vol. 16, pp. 353-360, 2004.
[25] A. K. Fletcher, S. Rangan, L. Varshney, and A. Bhargava, "Neural reconstruction with approximate message passing (NeuRAMP)," in Proc. Neural Information Process. Syst., Granada, Spain, Dec. 2011, pp. 2555-2563.
[26] A. K. Fletcher and S. Rangan, "Scalable inference for neuronal connectivity from calcium imaging," in Proc. Neural Information Processing Systems, 2014, pp. 2843-2851.
[27] A. Fletcher, M. Sahraee-Ardakan, S. Rangan, and P. Schniter, "Rigorous dynamics and consistent estimation in arbitrarily conditioned linear systems," arXiv, 2017.
[28] J. P. Vila and P. Schniter, "An empirical-Bayes approach to recovering linearly constrained non-negative sparse signals," IEEE Trans. Signal Process., vol. 62, no. 18, pp. 4689-4703, 2014.
[29] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890-912, 2008.
6,466 | 6,849 | Toward Goal-Driven Neural Network Models for the
Rodent Whisker-Trigeminal System
Chengxu Zhuang
Department of Psychology
Stanford University
Stanford, CA 94305
[email protected]
Mitra Hartmann
Departments of Biomedical Engineering
and Mechanical Engineering
Northwestern University
Evanston, IL 60208
[email protected]
Jonas Kubilius
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Brain and Cognition, KU Leuven, Belgium
[email protected]
Daniel Yamins
Departments of Psychology and Computer Science
Stanford Neurosciences Institute
Stanford University
Stanford, CA 94305
[email protected]
Abstract
In large part, rodents "see" the world through their whiskers, a powerful tactile
sense enabled by a series of brain areas that form the whisker-trigeminal system.
Raw sensory data arrives in the form of mechanical input to the exquisitely sensitive,
actively-controllable whisker array, and is processed through a sequence of neural
circuits, eventually arriving in cortical regions that communicate with decisionmaking and memory areas. Although a long history of experimental studies has
characterized many aspects of these processing stages, the computational operations
of the whisker-trigeminal system remain largely unknown. In the present work,
we take a goal-driven deep neural network (DNN) approach to modeling these
computations. First, we construct a biophysically-realistic model of the rat whisker
array. We then generate a large dataset of whisker sweeps across a wide variety
of 3D objects in highly-varying poses, angles, and speeds. Next, we train DNNs
from several distinct architectural families to solve a shape recognition task in
this dataset. Each architectural family represents a structurally-distinct hypothesis
for processing in the whisker-trigeminal system, corresponding to different ways
in which spatial and temporal information can be integrated. We find that most
networks perform poorly on the challenging shape recognition task, but that specific
architectures from several families can achieve reasonable performance levels.
Finally, we show that Representational Dissimilarity Matrices (RDMs), a tool for
comparing population codes between neural systems, can separate these higherperforming networks with data of a type that could plausibly be collected in a
neurophysiological or imaging experiment. Our results are a proof-of-concept that
DNN models of the whisker-trigeminal system are potentially within reach.
1 Introduction
The sensory systems of brains do remarkable work in extracting behaviorally useful information from
noisy and complex raw sense data. Vision systems process intensities from retinal photoreceptor
arrays, auditory systems interpret the amplitudes and frequencies of hair-cell displacements, and
somatosensory systems integrate data from direct physical interactions. [28] Although these systems
[Figure 1: schematic, panels a-d, showing the pathway from the whisker array through the trigeminal ganglion, trigeminal nuclei, and thalamus to cortical areas S1 and S2, and the modeling pipeline from input shapes and sweeps through the artificial vibrissal array to a task-optimized neural network producing shape category recognition output.]

Figure 1: Goal-Driven Approach to Modeling Barrel Cortex: a. Rodents have highly sensitive whisker (vibrissal) arrays that provide input data about the environment. Mechanical signals from the vibrissae are relayed by primary sensory neurons of the trigeminal ganglion to the trigeminal nuclei, the origin of multiple parallel pathways to S1 and S2. (Figure modified from [8].) This system is a prime target for modeling because it is likely to be richly representational, but its computational underpinnings are largely unknown. Our long-term approach to modeling the whisker-trigeminal system is goal-driven: using an artificial whisker-array input device built using extensive biophysical measurements (b.), we seek to optimize neural networks of various architectures (c.) to solve ethologically-relevant shape recognition tasks (d.), and then measure the extent to which these networks predict fine-grained response patterns in real neural recordings.
differ radically in their input modalities, total number of neurons, and specific neuronal microcircuits,
they share two fundamental characteristics. First, they are hierarchical sensory cascades, albeit
with extensive feedback, consisting of sequential processing stages that together produce a complex
transformation of the input data. Second, they operate in inherently highly-structured spatiotemporal
domains, and are generally organized in maps that reflect this structure [11].
Extensive experimental work in the rodent whisker-trigeminal system has provided insights into how
these principles help rodents use their whiskers (also known as vibrissae) to tactually explore objects
in their environment. Similar to hierarchical processing in the visual system (e.g., from V1 to V2, V4
and IT [11, 12]), processing in the somatosensory system is also known to be hierarchical[27, 17, 18].
For example, in the whisker trigeminal system, information from the whiskers is relayed from primary
sensory neurons in the trigeminal ganglion to multiple trigeminal nuclei; these nuclei are the origin
of several parallel pathways conveying information to the thalamus [36, 24] and then to primary and
secondary somatosensory cortex (S1 and S2) [4]. However, although the rodent somatosensory system
has been the subject of extensive experimental efforts[2, 26, 20, 32], there have been comparatively
few attempts at computational modeling of this important sensory system.
Recent work has shown that deep neural networks (DNNs), whose architectures inherently contain
hierarchy and spatial structure, can be effective models of neural processing in vision[34, 21] and
audition[19]. Motivated by these successes, in this work we illustrate initial steps toward using
DNNs to model rodent somatosensory systems. Our driving hypothesis is that the vibrissal-trigeminal
system is optimized to use whisker-based sensor data to solve somatosensory shape-recognition
tasks in complex, variable real-world environments. The underlying idea of this approach is thus to
use goal-driven modeling (Fig 1), in which the DNN parameters, both discrete and continuous,
are optimized for performance on a challenging ethologically-relevant task [35]. Insofar as shape
recognition is a strong constraint on network parameters, optimized neural networks resulting from
such a task may be an effective model of real trigeminal-system neural response patterns.
This idea is conceptually straightforward, but implementing it involves surmounting several challenges. Unlike vision or audition, where signals from the retina or cochlea can for many purposes
be approximated by a simple structure (namely, a uniform data array representing light or sound
intensities and frequencies), the equivalent mapping from stimulus (e.g. object in a scene) to sensor
input in the whisker system is much less direct. Thus, a biophysically-realistic embodied model of
the whisker array is a critical first component of any model of the vibrissal system. Once the sensor
array is available, a second key problem is building a neural network that can accept whisker data
input and use it to solve relevant tasks. Aside from the question of the neural network design itself,
[Figure 2: panels a-d. Panel d plots classification performance against the variations excluded in train/test: None, Speed, Scale, Position, Rotation, Scale + Speed, Rotation + Scale.]

Figure 2: Dynamic Three-Dimensional Whisker Model: a. Each whisker element is composed of a set of cuboid links. The follicle cuboid has a fixed location, and is attached to movable cuboids making up the rest of the whisker. Motion is constrained by linear and torsional springs between each pair of cuboids. The number of cuboid links and spring equilibrium displacements are chosen to match known whisker length and curvature [31], while damping and spring stiffness parameters are chosen to ensure mechanically plausible whisker motion trajectories. b. We constructed a 31-whisker array, arranged in a rough 5 x 7 grid (with 4 missing elements) on an ellipsoid representing the rodent's mystacial pad. Whisker number and placement was matched to the known anatomy of the rat [31]. c. During dataset construction, the array is brought into contact with each object at three vertical heights, and four 90-degree-separated angles, for a total of 12 sweeps. The object's size, initial orientation angle, as well as sweep speed, vary randomly between each group of 12 sweeps. Forces and torques are recorded at the three cuboids closest to the follicle, for a total of 18 measurements per whisker at each timepoint. d. Basic validation of performance of a binary linear classifier trained on raw sensor output to distinguish between two shapes (in this case, a duck versus a teddy bear). The classifier was trained/tested on several equal-sized datasets in which variation on one or more latent variable axes has been suppressed. "None" indicates that all variations are present. Dotted line represents chance performance (50%).
knowing what the "relevant tasks" are for training a rodent whisker system, in a way that is sufficiently
concrete to be practically actionable, is a significant unknown, given the very limited amount of
ethologically-relevant behavioral data on rodent sensory capacities[32, 22, 25, 1, 9]. Collecting neural
data of sufficient coverage and resolution to quantitatively evaluate one or more task-optimized neural
network models represents a third major challenge. In this work, we show initial steps toward the
first two of these problems (sensor modeling and neural network design/training).
2 Modeling the Whisker Array Sensor
In order to provide our neural networks inputs similar to those of the rodent vibrissal system, we
constructed a physically-realistic three-dimensional (3D) model of the rodent vibrissal array (Fig. 2).
To help ensure biological realism, we used an anatomical model of the rat head and whisker array that
quantifies whisker number, length, and intrinsic curvature as well as relative position and orientation
on the rat's face [31]. We wanted the mechanics of each whisker to be reasonably accurate, but at the same time also needed simulations to be fast enough to generate a large training dataset. We therefore used Bullet [33], an open-source real-time physics engine used in many video games.

Statics. Individual whiskers were each modeled as chains of "cuboid" links with a square cross-section and length of 2 mm. The number of links in each whisker was chosen to ensure that the total whisker length matched that of the corresponding real whisker (Fig. 2a). The first (most proximal) link of each simulated whisker corresponded to the follicle at the whisker base, where the whisker inserts into the rodent's face. Each whisker follicle was fixed to a single location in 3D space. The links of the whisker are given first-order linear and rotational damping factors to ensure that unforced motions dissipate over time. To simplify the model, the damping factors were assumed to be the same across all links of a given whisker, but different from whisker to whisker. Each pair of links within a whisker was connected with linear and torsional first-order springs; these springs both have two parameters (equilibrium displacement and stiffness). The equilibrium displacements of each spring were chosen to ensure that the whisker's overall static shape matched the measured curvature for the corresponding real whisker. Although we did not specifically seek to match the detailed biophysics of the whisker mechanics (e.g. the fact that the stiffness of the whisker increases with the 4th power of its radius), we assumed that the stiffness of the springs spanning a given length was linearly correlated with the distance between the starting position of the spring and the base, roughly capturing the fact that the whisker is thicker and stiffer at the bottom [13].
The full simulated whisker array consisted of 31 simulated whiskers, ranging in length from 8mm to 60mm (Fig. 2b). The fixed locations of the follicles of the simulated whiskers were placed on a curved ellipsoid surface modeling the rat's mystacial pad (cheek), with the relative locations of the follicles on this surface obtained from the morphological model [31], forming roughly a 5 x 7 grid-like pattern with four vacant positions.
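Our simulations use Bullet, but the core mechanics are easy to state in a few lines; below is a minimal, self-contained sketch of a single whisker as a damped linear spring chain with a pinned follicle, integrated by explicit Euler. Torsional springs, collision handling, and the per-whisker optimized constants are omitted, and the parameter values are illustrative placeholders.

    import numpy as np

    def step_whisker(pos, vel, rest_vec, k_lin=50.0, damp=2.0, dt=1e-3, f_ext=None):
        # pos, vel: (n_links, 3) cuboid centers/velocities; pos[0] is the follicle.
        # rest_vec: (n_links - 1, 3) equilibrium displacements chosen to match
        # the measured static curvature of the real whisker.
        force = np.zeros_like(pos) if f_ext is None else f_ext.copy()
        for i in range(len(pos) - 1):
            stretch = (pos[i + 1] - pos[i]) - rest_vec[i]
            f = -k_lin * stretch                  # linear spring between links
            force[i + 1] += f
            force[i] -= f
        force -= damp * vel                       # first-order damping
        vel = vel + dt * force                    # unit link mass, explicit Euler
        vel[0] = 0.0                              # follicle pinned in space
        pos = pos + dt * vel
        return pos, vel, force[:3]                # proximal forces, as recorded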
Dynamics. Whisker dynamics are generated by collisions with moving three-dimensional rigid
bodies, also modeled as Bullet physics objects. The motion of a simulated whisker in reaction to
external forces from a collision is constrained only by the fixed spatial location of the follicle, and
by the damped dynamics of the springs at each node of the whisker. However, although the spring
equilibrium displacements are determined by static measurements as described above, the damping
factors and spring stiffnesses cannot be fully determined from these data. If we had detailed dynamic
trajectories for all whiskers during realistic motions (e.g. [29]), we would have used this data to
determine these parameters, but such data are not yet available.
In the absence of empirical trajectories, we used a heuristic method to determine damping and
stiffness parameters, maximizing the "mechanical plausibility" of whisker behavior. Specifically, we
constructed a battery of scenarios in which forces were applied to each whisker for a fixed duration.
These scenarios included pushing the whisker tip towards its base (axial loading), as well as pushing
the whisker parallel or perpendicular to its intrinsic curvature (transverse loading in or out of the plane
of intrinsic curvature). For each scenario and each potential setting of the unknown parameters, we
simulated the whisker?s recovery after the force was removed, measuring the maximum displacement
between the whisker base and tip caused by the force prior to recovery (d), the total time to recovery
(T ), the average arc length travelled by each cuboid during recovery (S), and the average translational
speed of each cuboid during recovery (v). We used metaparameter optimization [3] to automatically
identify stiffness and damping parameters that simultaneously minimized the time and complexity of
the recovery trajectory, while also allowing the whisker to be flexible. Specifically, we minimized the loss function 0.025S + d + 20T - 2v, where the coefficients were set to make the terms of comparable magnitude. The optimization was performed for every whisker independently, as whisker length and curvature interact nonlinearly with the recovery dynamics.
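The metaparameter search can be rendered as simply as a random search over (stiffness, damping) pairs scoring the stated loss; run_recovery below is a stand-in for simulating the recovery trajectory in the loading scenarios and measuring (d, T, S, v), and the search ranges are illustrative.

    import numpy as np

    def fit_whisker_params(run_recovery, n_trials=200, seed=0):
        # run_recovery(stiffness, damping) -> (d, T, S, v): max base-tip
        # displacement, recovery time, mean arc length, mean link speed.
        rng = np.random.default_rng(seed)
        best, best_loss = None, np.inf
        for _ in range(n_trials):
            k = 10.0 ** rng.uniform(0.0, 3.0)     # stiffness, log-uniform
            c = 10.0 ** rng.uniform(-2.0, 1.0)    # damping, log-uniform
            d, T, S, v = run_recovery(k, c)
            loss = 0.025 * S + d + 20.0 * T - 2.0 * v   # loss from the text
            if loss < best_loss:
                best, best_loss = (k, c), loss
        return best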
3 A Large-Scale Whisker Sweep Dataset
Using the whisker array, we generated a dataset of whisker responses to a variety of objects.
Sweep Configuration. The dataset consists of a series of simulated sweeps, mimicking one action in
which the rat runs its whiskers past an object while holding its whiskers fixed (no active whisking).
During each sweep, a single 3D object moves through the whisker array from front to back (rostral to
caudal) at a constant speed. Each sweep lasts a total of one second, and data is sampled at 110Hz.
Sweep scenarios vary both in terms of the identity of the object presented, as well as the position,
angle, scale (defined as the length of longest axis), and speed at which it is presented. To simulate
observed rat whisking behavior in which animals often sample an object at several vertical locations
(head pitches) [14], sweeps are performed at three different heights along the vertical axis and at each
of four positions around the object (0, 90, 180, and 270 degrees around the vertical axis), for a total of 12
sweeps per object/latent variable setting (Fig. 2c).
Latent variable settings are sampled randomly and independently for each group of sweeps, with
object rotation sampled uniformly within the space of all 3D rotations, object scale sampled uniformly
between 25-135mm, and sweep speed sampled randomly between 77-154mm/s. Once these variables
are chosen, the object is placed at a position that is chosen uniformly in a 20 ? 8 ? 20mm3 volume
centered in front of the whisker array at the chosen vertical height, and is moved along the ray toward
the center of the whisker array at the chosen speed. The position of the object may be adjusted to avoid
collisions with the fixed whisker base ellipsoid during the sweep. See supplementary information for
details.
The data collected during a sweep includes, for each whisker, the forces and torques from all springs
connecting to the three cuboids most proximate to the base of the whisker. This choice reflects the idea
that mechanoreceptors are distributed along the entire length of the follicle at the whisker base [10].
The collected data comprises a matrix of shape 110 x 31 x 3 x 2 x 3, with dimensions respectively
corresponding to: the 110 time samples; the 31 spatially distinct whiskers; the 3 recorded cuboids;
the forces and torques from each cuboid; and the three directional components of force/torque.
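A sketch of how one group of 12 sweeps and its latent variables might be sampled and stored, matching the layout just described; simulate_sweep stands in for the physics simulation, and the quaternion trick yields a uniform random 3D rotation.

    import numpy as np

    def sample_sweep_group(simulate_sweep, obj_id, rng):
        # Latents drawn once per group of 12 sweeps.
        scale = rng.uniform(25.0, 135.0)           # mm, longest axis
        speed = rng.uniform(77.0, 154.0)           # mm/s
        quat = rng.normal(size=4)
        quat /= np.linalg.norm(quat)               # uniform rotation (unit quaternion)
        # 12 sweeps x (110 time, 31 whiskers, 3 cuboids, force/torque, xyz).
        sweeps = np.zeros((12, 110, 31, 3, 2, 3))
        i = 0
        for height in ("bottom", "middle", "top"):
            for angle in (0, 90, 180, 270):
                sweeps[i] = simulate_sweep(obj_id, quat, scale, speed, height, angle)
                i += 1
        return sweeps, (quat, scale, speed)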
Object Set. The objects used in each sweep are chosen from a subset of the ShapeNet [6] dataset,
which contains over 50,000 3D objects, each with a distinct geometry, belonging to 55 categories.
Because the 55 ShapeNet categories are at a variety of levels of within-category semantic similarity,
we refined the original 55 categories into a taxonomy of 117 (sub)categories that we felt had a more
4
Time (110)
Whiskers (31) Forces and torques (18)
Forces and torques (18)
)
Whiskers (31)
Time (110)
isk
Wh
Time (110)
(
...
c) Spatial - Temporal
(
b) Temporal - Spatial
Forces and torques (18)
ers
(31
)
a) Spatiotemporal
x31
d) Recurrent Skip/Feedback
)
Whiskers (31) Forces and torques (18)
x110
Figure 3: Families of DNN Architectures tested: a. ?Spatiotemporal? models include spatiotemporal
integration at all stages. Convolution is performed on both spatial and temporal data dimensions, followed
by one or several fully connected layers. b. ?Temporal-Spatial? networks in which temporal integration is
performed separately before spatial integration. Temporal integration consists of one-dimensional convolution
over the temporal dimension, separately for each whisker. In spatial integration stages, outputs from each
whisker are registered to their natural two-dimensional (2D) spatial grid and spatial convolution performed. c. In
"Spatial-Temporal" networks, spatial convolution is performed first, replicated with shared weights across time
points; this is then followed by temporal convolution. d. Recurrent networks do not explicitly contain separate
units to handle different discrete timepoints, relying instead on the states of the units to encode memory traces.
These networks can have local recurrence (e.g. simple addition or more complicated motifs like LSTMs or
GRUs), as well as long-range skip and feedback connections.
uniform amount of within-category shape similarity. The distribution of number of ShapeNet objects
is highly non-uniform across categories, so we randomly subsampled objects from large categories.
This procedure ensured that all categories contained approximately the same number of objects. Our
final object set included 9,981 objects in 117 categories, ranging between 41 and 91 object exemplars
per category (mean=85.3, median=91, std=10.2, see supplementary material for more details). To
create the final dataset, for every object, 26 independent samples of rotation, scaling, and speed were
drawn and the corresponding group of 12 sweeps created. Out of these 26 sweep groups, 24 were
added to a training subset, while the remainder were reserved for testing.
Basic Sensor Validation. To confirm that the whisker array was minimally functional before
proceeding to more complex models, we produced smaller versions of our dataset in which sweeps
were sampled densely for two objects (a bear and a duck). We also produced multiple easier versions
of this dataset in which variation along one or several latent variables was suppressed. We then
trained binary support vector machine (SVM) classifiers to report object identity in these datasets,
using only the raw sensor data as input, and testing classification accuracy on held-out sweeps (Fig.
2d). We found that with scale and object rotation variability suppressed (but with speed and position
variability retained), the sensor was able to nearly perfectly identify the objects. However, with all
sources of variability present, the SVM was just above chance in its performance, and some combinations
of variability were more challenging for the sensor than others (details can be found in supplementary
information). Thus, we concluded that our virtual whisker array was basically functional, but that
unprocessed sensor data cannot be used to directly read out object shape in anything but the most
highly controlled circumstances. As in the case of vision, it is exactly this circumstance that calls for
a deep cascade of sensory processing stages.
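A hedged sketch of this validation step, assuming flattened raw sweeps as features; the actual preprocessing, SVM kernel, and regularization used are not specified in the text:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Stand-in data: rows are flattened sweeps, labels are bear = 0 / duck = 1.
X = np.random.randn(200, 61380).astype(np.float32)
y = np.random.randint(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```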
4
Computational Architectures
We trained deep neural networks (DNNs) in a variety of different architectural families (Fig. 3). These
architectural families represent qualitatively different classes of hypotheses about the computations
performed by the stages of processing in the vibrissal-trigeminal system. The fundamental questions
explored by these hypotheses are how and where temporal and spatial information are integrated.
Within each architectural family, the differences between specific parameter settings represent nuanced
refinements of the larger hypothesis of that family. Parameter specifics include how many layers
of each type are in the network, how many units are allocated to each layer, what kernel sizes are
used at each layer, and so on. Biologically, these parameters may correspond to the number of brain
regions (areas) involved, how many neurons these regions have relative to each other, and neurons?
local spatiotemporal receptive field sizes [35].
Simultaneous Spatiotemporal Integration. In this family of networks (Fig. 3a), networks consisted
of convolution layers followed by one or more fully connected layers. Convolution is performed
simultaneously on both temporal and spatial dimensions of the input (and their corresponding
downstream dimensions). In other words, temporally-proximal responses from spatially-proximal
whiskers are combined together simultaneously, so that neurons in each successive layers have larger
receptive fields in both spatial and temporal dimensions at once. We evaluated both 2D convolution,
in which the spatial dimension is indexed linearly across the list of whiskers (first by vertical columns
and then by lateral row on the 5 × 7 grid), as well as 3D convolution in which the two dimensions of
the 5 × 7 spatial grid are explicitly represented. Data from the three vertical sweeps of the same object
were then combined to produce the final output, culminating in a standard softmax cross-entropy.
Separate Spatial and Temporal Integration. In these families, networks begin by integrating temporal and spatial information separately (Fig. 3b-c). One subclass of these networks, "Temporal-Spatial" (Fig. 3b), first integrates temporal information for each individual whisker separately
and then combines the information from different whiskers in higher layers. Temporal processing
is implemented as 1-dimensional convolution over the temporal dimension. After several layers of
temporal-only processing (the number of which is a parameter), the outputs at each whisker are then
reshaped into vectors and combined into a 5 × 7 whisker grid. Spatial convolutions are then applied
for several layers. Finally, as with the spatiotemporal network described above, features from three
sweeps are concatenated into a single fully connected layer which outputs softmax logits.
Conversely, "Spatial-Temporal" networks (Fig. 3c) first use 2D convolution to integrate across
whiskers for some number of layers, with shared parameters between the copies of the network
for each timepoint. The temporal sequence of outputs is then combined, and several layers of 1D
convolution are then applied in the temporal domain. Both Temporal-Spatial and Spatial-Temporal
networks can be viewed as subclasses of 3D simultaneous spatiotemporal integration in which
initial and final portions of the network have kernel size 1 in the relevant dimensions. These two
network families can thus be thought of as two different strategies for allocating parameters between
dimensions, i.e. different possible biological circuit structures.
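Purely as an illustration, a minimal PyTorch sketch of the Temporal-Spatial idea (ours, not the authors' code; layer counts, channel widths, kernel sizes, and the whisker-to-grid mapping `grid_index` are illustrative placeholders):

```python
import torch
import torch.nn as nn

class TemporalSpatialNet(nn.Module):
    def __init__(self, n_classes=117, n_feats=18):
        super().__init__()
        # Temporal stage: shared 1-D convs applied to each whisker's
        # (features x time) signal independently.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_feats, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Spatial stage: whisker outputs registered onto the 5 x 7 grid.
        self.spatial = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(128 * 5 * 7, n_classes),
        )

    def forward(self, x, grid_index):
        # x: (batch, whiskers, features, time); grid_index: whisker -> grid slot
        b, w, f, t = x.shape
        h = self.temporal(x.reshape(b * w, f, t)).reshape(b, w, -1)  # (b, w, 64)
        grid = h.new_zeros(b, 5 * 7, h.shape[-1])
        grid[:, grid_index] = h                   # scatter whiskers onto grid
        grid = grid.permute(0, 2, 1).reshape(b, -1, 5, 7)
        return self.spatial(grid)                 # softmax logits

net = TemporalSpatialNet()
x = torch.randn(2, 31, 18, 110)
idx = torch.arange(31)                            # illustrative grid mapping
print(net(x, idx).shape)                          # torch.Size([2, 117])
```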
Recurrent Neural Networks with Skip and Feedback Connections. This family of networks (Fig.
3d) does not allocate units or parameters explicitly for the temporal dimension, and instead requires
temporal processing to occur via the temporal update evolution of the system. These networks
are built around a core feedforward 2D spatial convolution structure, with the addition of (i) local
recurrent connections, (ii) long-range feedforward skips between non-neighboring layers, and (iii)
long-range feedback connections. The most basic update rule for the dynamic trajectory of such a
network through (discrete) time is
$$H^{i}_{t+1} = F_i\Big(\bigoplus_{j \neq i} R^{j}_{t}\Big) + \tau_i H^{i}_{t}, \qquad R^{i}_{t} = A_i\big[H^{i}_{t}\big],$$
where $R^{i}_{t}$ and $H^{i}_{t}$ are the output and hidden state of layer $i$ at time $t$ respectively, $\tau_i$ are decay constants, $\bigoplus$
represents concatenation across the channel dimension with appropriate resizing to align dimensions,
$F_i$ is the standard neural network update function (e.g. 2-D convolution), and $A_i$ is the activation function
at layer $i$. The learned parameters of this type of network include the values of the parameters of $F_i$,
which comprise both the feedforward and feedback weights from connections coming into layer
$i$, as well as the decay constants $\tau_i$.
the simple additive rule above with a local recurrent structure such as Long Short-Term Memory
(LSTM) [15] or Gated Recurrent Networks (GRUs) [7].
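A toy sketch of this update rule (ours, not the paper's implementation): two convolutional layers with mutual feedforward/feedback connections; external input handling and the resizing mentioned above are omitted, and tau = 0.5 is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

def step(H, R, F, A, tau, inputs_of):
    """One time step: H_i <- F_i(concat_j R_j) + tau_i * H_i; R_i = A_i(H_i)."""
    H_new, R_new = [], []
    for i in range(len(H)):
        x = torch.cat([R[j] for j in inputs_of[i]], dim=1)  # channel concat
        H_new.append(F[i](x) + tau[i] * H[i])
        R_new.append(A[i](H_new[i]))
    return H_new, R_new

# Toy instance: layer 0 receives feedback from layer 1, and vice versa.
F = [nn.Conv2d(8, 8, 3, padding=1) for _ in range(2)]
A = [torch.relu, torch.relu]
tau = [0.5, 0.5]
inputs_of = {0: [1], 1: [0]}

H = [torch.zeros(1, 8, 16, 16) for _ in range(2)]
R = [torch.zeros(1, 8, 16, 16) for _ in range(2)]
for t in range(5):
    H, R = step(H, R, F, A, tau, inputs_of)
```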
5
Results
Model Performance: Our strategy in identifying potential models of the whisker-trigeminal system
is to explore many specific architectures within each architecture family, evaluating each specific
architecture both in terms of its ability to solve the shape recognition task in our training dataset, and
its efficiency (number of parameters and number of overall units). Because we evaluate networks on
held-out validation data, it is not inherently unfair to compare results from networks with different numbers
of parameters, but for simplicity we generally evaluated models with similar numbers of parameters:
exceptions are noted where they occur. As we evaluated many individual structures within each
family, a list of the specific models and parameters are given in the supplementary materials.
Our results (Fig. 4) can be summarized with the following conclusions:
• Many specific network choices within all families do a poor job at the task, achieving just-above-chance performance.
• However, within each family, certain specific choices of parameters lead to much better network
performance. Overall, the best performance was obtained for the Temporal-Spatial model, with
Figure 4: Performance results. a. Each bar in this figure represents one model. The positive y-axis is
performance measured in percent correct (top1=dark bar, chance=0.85%, top5=light bar, chance=4.2%). The
negative y-axis indicates the number of units in networks, in millions of units. Small italic numbers indicate
number of model parameters, in millions. Model architecture family is indicated by color. "ncmf" means
n convolution and m fully connected layers. Detailed definition of individual model labels can be found in
supplementary material. b. Confusion Matrix for the highest-performing model (in the Temporal-Spatial family).
The objects are regrouped using methods described in supplementary material.
15.2% top-1 and 44.8% top-5 accuracy. Visualizing a confusion matrix for this network (Fig. 4b)
and for other high-performing networks indicates that the errors they make are generally reasonable.
• Training the filters was extremely important for performance; no architecture with random filters
performed above chance levels.
• Architecture depth was an important factor in performance. Architectures with fewer than four
layers achieved substantially lower performance than somewhat deeper ones.
• Number of model parameters was a somewhat important factor in performance within an architectural family, but only to a point, and not between architectural families. The Temporal-Spatial
architecture was able to outperform other classes while using significantly fewer parameters.
• Recurrent networks with long-range feedback were able to perform nearly as well as the Temporal-Spatial model with equivalent numbers of parameters, while using far fewer units. These long-range
feedbacks appeared critical to performance, with purely local recurrent architectures (including
LSTM and GRU) achieving significantly worse results.
Model Discrimination: The above results indicated that we had identified several high-performing
networks in quite distinct architecture families. In other words, the strong performance constraint
allows us to identify several specific candidate model networks for the biological system, reducing a
much larger set of mostly non-performing neural networks into a "shortlist". The key biologically
relevant follow-up question is then: how should we distinguish between the elements in the shortlist?
That is, what reliable signatures of the differences between these architectures could be extracted
from data obtainable from experiments that use today's neurophysiological tools?
To address this question, we used Representational Dissimilarity Matrix (RDM) analysis [23]. For
a set of stimuli S, RDMs are |S| × |S|-shaped correlation distance matrices taken over the feature
dimensions of a representation, e.g. matrices with ij-th entry RDM[i, j] = 1 − corr(F[i], F[j]) for
stimuli i, j and corresponding feature output F [i], F [j]. The RDM characterizes the geometry of
stimulus representation in a way that is independent of the individual feature dimensions. RDMs
can thus be quantitatively compared between different feature representations of the same data. This
procedure has been useful in establishing connections between deep neural networks and the ventral
visual stream, where it has been shown that the RDMs of features from different layers of neural
networks trained to solve categorization tasks match RDMs computed from visual brain areas at
different positions along the ventral visual hierarchy [5, 34, 21]. RDMs are readily computable
from neural response pattern data samples, and are in general comparatively robust to variability
due to experimental randomness (e.g. electrode/voxel sampling). RDMs for real neural populations from the rodent whisker-trigeminal system could be obtained through a conceptually simple
electrophysiological recording experiment similar in spirit to those performed in macaque [34].
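As a sketch, the RDM of one layer's responses is a few lines over a (stimuli × features) matrix (stand-in data; a real analysis would use the model's feature outputs):

```python
import numpy as np

def rdm(F):
    """RDM[i, j] = 1 - corr(F[i], F[j]) over rows of F (stimuli x features)."""
    return 1.0 - np.corrcoef(F)

layer_features = np.random.randn(100, 512)   # stand-in layer responses
M = rdm(layer_features)
assert M.shape == (100, 100) and np.allclose(np.diag(M), 0.0)
```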
We obtained RDMs for several of our high-performing models, computing RDMs separately for each
model layer (Fig. 5a), averaging feature vectors over different sweeps of the same object before
Figure 5: Using RDMs to Discriminate Between High-Performing Models. a. Representational Dissimilarity Matrices (RDMs) for selected layers of a high-performing network from Fig. 4a, showing early, intermediate
and late model layers. Model feature vectors are averaged over classes in the dataset prior to RDM computation,
and RDMs are shown using the same ordering as in Fig. 4b. b. Two-dimensional MDS embedding of RDMs for
the feedback RNN (green squares) and Temporal-Spatial (red circles) model. Points correspond to layers, lines
are drawn between adjacent layers, with darker color indicating earlier layers. Multiple lines are models trained
from different initial conditions, allowing within-model noise estimate.
computing the correlations. This procedure led to 9,981 × 9,981-sized matrices (there were 9,981
distinct objects in our dataset). We then computed distances between each layer of each model in
RDM space, as in (e.g.) [21]. To determine if differences in this space between models and/or
layers were significant, we computed RDMs for multiple instances of each model trained with
different initial conditions, and compared the between-model to within-model distances. We found
that while the top layers of models partially converged (likely because they were all trained on the
same task), intermediate layers diverged substantially between models, by amounts larger than either
the initial-condition-induced variability within a model layer or the distance between nearby layers of
the same model (Fig. 5b). This observation is important from an experimental design point of view
because it shows that different model architectures differ substantially on a well-validated metric that
may be experimentally feasible to measure.
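A sketch of this model-comparison step, with stand-in RDMs; here the distance between two RDMs is taken as the correlation distance between their upper triangles, one reasonable choice (the exact metric of [21] may differ in detail), followed by a 2-D MDS embedding as in Fig. 5b:

```python
import numpy as np
from sklearn.manifold import MDS

def rdm(F):
    return 1.0 - np.corrcoef(F)

# Stand-in: RDMs for six layers drawn from hypothetical models.
rdms = [rdm(np.random.randn(100, 512)) for _ in range(6)]
iu = np.triu_indices(100, k=1)
vecs = np.stack([m[iu] for m in rdms])        # one row per layer RDM

D = 1.0 - np.corrcoef(vecs)                   # pairwise RDM-space distances
np.fill_diagonal(D, 0.0)
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)     # 2-D embedding, one point/layer
```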
6
Conclusion
We have introduced a model of the rodent whisker array informed by biophysical data, and used it to
generate a large high-variability synthetic sweep dataset. While the raw sensor data is sufficiently
powerful to separate objects at low amounts of variability, at higher variation levels deeper nonlinear neural networks are required to extract object identity. We found further that while many
particular network architectures, especially shallow ones, fail to solve the shape recognition task,
reasonable performance levels can be obtained for specific architectures within each distinct network
structural family tested. We then showed that a population-level measurement that is in principle
experimentally obtainable can distinguish between these higher-performing networks. To summarize,
we have shown that a goal-driven DNN approach to modeling the whisker-trigeminal system is
feasible. Code for all results, including the whisker model and neural networks, is publicly available
at https://github.com/neuroailab/whisker_model.
We emphasize that the present work is proof-of-concept rather than a model of the real nervous
system. A number of critical issues must be overcome before our true goal, a full integration of
computational modeling with experimental data, becomes possible. First, although our sensor
model was biophysically informed, it does not include active whisking, and the mechanical signals at
the whisker bases are approximate [29, 16].
An equally important problem is that the goal that we set for our network, i.e. shape discrimination
between 117 human-recognizable object classes, is not directly ethologically relevant to rodents. The
primary reason for this task choice was practical: ShapeNet is a readily available and high-variability
source of 3D objects. If we had instead used a small, manually constructed, set of highly simplified
objects that we hoped were more "rat-relevant", it is likely that our task would have been too simple
to constrain neural networks at the scale of the real whisker-trigeminal system. Extrapolating from
modeling of the visual system, training a deep net on 1000 image categories yields a feature basis that
can readily distinguish between previously-unobserved categories [34, 5, 30]. Similarly, we suggest
that the large and variable object set used here may provide a meaningful constraint on network
structure, as the specific object geometries may be less important than having a wide spectrum of
such geometries. However, a key next priority is systematically building an appropriately large and
variable set of objects, textures or other class boundaries that more realistically model the tasks that a
rodent faces. The specific results obtained (e.g. which families are better than others, and the exact
structure of learned representations) are likely to change significantly when these improvements are
made.
In concert with these improvements, we plan to collect neural data in several areas within the
whisker-trigeminal system, enabling us to make direct comparisons between model outputs and
neural responses with metrics such as the RDM. There are few existing experimentally validated
signatures of the computations in the whisker-trigeminal system. Ideally, we will validate one or a
small number of the specific model architectures described above by identifying a detailed mapping
of model internal layers to brain-area specific response patterns. A core experimental issue is the
magnitude of real experimental noise in trigeminal-system RDMs. We will need to show that this
noise does not swamp inter-model distances (as shown in Fig. 5b), enabling us to reliably identify
which model(s) are better predictors of the neural data. Though real neural RDM noise cannot yet be
estimated, the intermodel RDM distances that we can compute computationally will be useful for
informing experimental design decisions (e.g. trial count, stimulus set size, etc.).
In the longer term, we expect to use detailed encoding models of the whisker-trigeminal system as
a platform for investigating issues of representation learning and sensory-based decision making
in the rodent. A particularly attractive option is to go beyond fixed class discrimination problems
and situate a synthetic whisker system on a mobile animal in a navigational environment where
it will be faced with a variety of actively-controlled discrete and continuous estimation problems.
In this context, we hope to replace our currently supervised loss function with a more naturalistic
reinforcement-learning based goal. By doing this work with a rich sensory domain in rodents, we
seek to leverage the sophisticated neuroscience tools available in these systems to go beyond what
might be possible in other model systems.
7
Acknowledgement
This project has been sponsored in part by hardware donation from the NVIDIA Corporation, a James S.
McDonnell Foundation Award (No. 220020469) and an NSF Robust Intelligence grant (No. 1703161)
to DLKY, the European Union's Horizon 2020 research and innovation programme (No. 705498) to
JK, and NSF awards (IOS-0846088 and IOS-1558068) to MJZH.
References
[1] Ehsan Arabzadeh, Erik Zorzin, and Mathew E. Diamond. Neuronal encoding of texture in the whisker
sensory pathway. PLoS Biology, 3(1), 2005.
[2] Michael Armstrong-James, Kevin Fox, and Ashis Das-Gupta. Flow of excitation within rat barrel cortex
on striking a single vibrissa. Journal of Neurophysiology, 68(4):1345–1358, 1992.
[3] James Bergstra, Dan Yamins, and David D Cox. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, pages
13–20. Citeseer, 2013.
[4] Laurens WJ Bosman, Arthur R Houweling, Cullen B Owens, Nouk Tanke, Olesya T Shevchouk, Negah
Rahmati, Wouter HT Teunissen, Chiheng Ju, Wei Gong, Sebastiaan KE Koekkoek, et al. Anatomical
pathways involved in generating and sensing rhythmic whisker movements. Frontiers in integrative
neuroscience, 5:53, 2011.
[5] Charles F Cadieu, Ha Hong, Daniel LK Yamins, Nicolas Pinto, Diego Ardila, Ethan A Solomon, Najib J
Majaj, and James J DiCarlo. Deep neural networks rival the representation of primate it cortex for core
visual object recognition. PLoS computational biology, 10(12):e1003963, 2014.
[6] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio
Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An
Information-Rich 3D Model Repository. ArXiv, 2015.
[7] Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of
neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[8] Martin Deschênes and Nadia Urbain. Vibrissal afferents from trigeminus to cortices. Scholarpedia,
4(5):7454, 2009.
[9] Mathew E Diamond, Moritz von Heimendahl, Per Magne Knutsen, David Kleinfeld, and Ehud Ahissar.
"Where" and "what" in the whisker sensorimotor system. Nat Rev Neurosci, 9(8):601–612, 2008.
[10] Satomi Ebara, Kenzo Kumamoto, Tadao Matsuura, Joseph E Mazurkiewicz, and Frank L Rice. Similarities
and differences in the innervation of mystacial vibrissal follicle-sinus complexes in the rat and cat: a
confocal microscopic study. Journal of Comparative Neurology, 449(2):103–119, 2002.
[11] Daniel J Felleman and David C Van Essen. Distributed hierarchical processing in the primate cerebral
cortex. Cerebral Cortex, 1(1):1–47, 1991.
[12] Melvyn A. Goodale and A. David Milner. Separate visual pathways for perception and action. Trends in
Neurosciences, 15(1):20–25, 1992.
[13] M. Hartmann. Vibrissa mechanical properties. Scholarpedia, 10(5):6636, 2015. revision #151934.
[14] Jennifer A Hobbs, R Blythe Towal, and Mitra JZ Hartmann. Spatiotemporal patterns of contact across the
rat vibrissal array during exploratory behavior. Frontiers in behavioral neuroscience, 9, 2015.
[15] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation,
9(8):1–32, 1997.
[16] Lucie A. Huet and Mitra J Z Hartmann. Simulations of a Vibrissa Slipping along a Straight Edge and an
Analysis of Frictional Effects during Whisking. IEEE Transactions on Haptics, 9(2):158–169, 2016.
[17] Koji Inui, Xiaohong Wang, Yohei Tamura, Yoshiki Kaneoke, and Ryusuke Kakigi. Serial processing in the
human somatosensory system. Cerebral Cortex, 14(8):851–857, 2004.
[18] Yoshiaki Iwamura. Hierarchical somatosensory processing. Current Opinion in Neurobiology, 8(4):522–528,
1998.
[19] A *Kell, D *Yamins, S Norman-Haignere, and J McDermott. Functional organization of auditory cortex
revealed by neural networks optimized for auditory tasks. In Society for Neuroscience, 2015.
[20] Jason ND Kerr, Christiaan PJ De Kock, David S Greenberg, Randy M Bruno, Bert Sakmann, and Fritjof
Helmchen. Spatial organization of neuronal population responses in layer 2/3 of rat barrel cortex. Journal
of Neuroscience, 27(48):13316–13328, 2007.
[21] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models
may explain IT cortical representation. PLoS Comput Biol, 10(11):e1003915, 2014.
[22] Per Magne Knutsen, Maciej Pietr, and Ehud Ahissar. Haptic object localization in the vibrissal system:
behavior and performance. The Journal of Neuroscience, 26(33):8451–64, 2006.
[23] Nikolaus Kriegeskorte, Marieke Mur, and Peter A. Bandettini. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2(November):4,
2008.
[24] Jeffrey D Moore, Nicole Mercer Lindsay, Martin Deschênes, and David Kleinfeld. Vibrissa self-motion
and touch are reliably encoded along the same somatosensory pathway from brainstem through thalamus.
PLoS Biol, 13(9):e1002253, 2015.
[25] Daniel H. O'Connor, Simon P. Peron, Daniel Huber, and Karel Svoboda. Neural activity in barrel cortex
underlying vibrissa-based object localization in mice. Neuron, 67(6):1048–1061, 2010.
[26] Carl CH Petersen, Amiram Grinvald, and Bert Sakmann. Spatiotemporal dynamics of sensory responses in
layer 2/3 of rat barrel cortex measured in vivo by voltage-sensitive dye imaging combined with whole-cell
voltage recordings and neuron reconstructions. Journal of Neuroscience, 23(4):1298–1309, 2003.
[27] T P Pons, P E Garraghty, David P Friedman, and Mortimer Mishkin. Physiological evidence for serial
processing in somatosensory cortex. Science, 237(4813):417–420, 1987.
[28] Dale Purves, George J Augustine, David Fitzpatrick, Lawrence C Katz, Anthony-Samuel LaMantia,
James O McNamara, and S Mark Williams. Neuroscience. Sunderland, MA: Sinauer Associates, 3, 2001.
[29] Brian W Quist, Vlad Seghete, Lucie A Huet, Todd D Murphey, and Mitra J Z Hartmann. Modeling Forces
and Moments at the Base of a Rat Vibrissa during Noncontact Whisking and Whisking against an Object.
J Neurosci, 34(30):9828–9844, 2014.
[30] Ali S Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an
astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW),
2014 IEEE Conference on, pages 512–519. IEEE, 2014.
[31] R. Blythe Towal, Brian W. Quist, Venkatesh Gopal, Joseph H. Solomon, and Mitra J Z Hartmann. The
morphology of the rat vibrissal array: A model for quantifying spatiotemporal patterns of whisker-object
contact. PLoS Computational Biology, 7(4), 2011.
[32] Moritz Von Heimendahl, Pavel M Itskov, Ehsan Arabzadeh, and Mathew E Diamond. Neuronal activity in
rat barrel cortex underlying texture discrimination. PLoS Biol, 5(11):e305, 2007.
[33] Wikipedia. Bullet (software) - Wikipedia, the free encyclopedia, 2016. [Online; accessed 19-October-2016].
[34] Daniel L K Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo.
Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings
of the National Academy of Sciences of the United States of America, 111(23):8619–24, June 2014.
[35] Daniel LK Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory
cortex. Nature Neuroscience, 19(3):356–365, 2016.
[36] Chunxiu Yu, Dori Derdikman, Sebastian Haidarliu, and Ehud Ahissar. Parallel thalamic pathways for
whisking and touch signals in the rat. PLoS Biol, 4(5):e124, 2006.
Activities in Associative Cortex of
Behaving Monkeys
Itay Gat and Naftali Tishby
Institute of Computer Science and
Center for Neural Computation
Hebrew University, Jerusalem 91904, Israel *
Abstract
So far there has been no general method for relating extracellular
electrophysiological measured activity of neurons in the associative
cortex to underlying network or "cognitive" states. We propose
to model such data using a multivariate Poisson Hidden Markov
Model. We demonstrate the application of this approach for temporal segmentation of the firing patterns, and for characterization
of the cortical responses to external stimuli. Using such a statistical model we can significantly discriminate two behavioral modes
of the monkey, and characterize them by the different firing patterns, as well as by the level of coherency of their multi-unit firing
activity.
Our study utilized measurements carried out on behaving Rhesus
monkeys by M. Abeles, E. Vaadia, and H. Bergman, of the Hadassa
Medical School of the Hebrew University.
1
Introduction
Hebb hypothesized in 1949 that the basic information processing unit in the cortex
is a cell-assembly which may include thousands of cells in a highly interconnected
network[1]. The cell-assembly hypothesis shifts the focus from the single cell to the
* {itay,tishby}@cs.huji.ac.il
complete network activity. This view has led several laboratories to develop technology for simultaneous multi-cellular recording from a small region in the cortex[2, 3].
There remains, however, a large discrepancy between our ability to construct neuralnetwork models and their correspondence with such multi-cellular recordings. To
some extent this is due to the difficulty in observing simultaneous activity of any
significant number of individual cells in a living nerve tissue. Extracellular electrophysiological measurements have so far obtained simultaneous recordings from
just a few randomly selected cells (about 10), a negligibly small number compared
to the size of the hypothesized cell-assembly. It is quite remarkable therefore, that
such local measurements in the associative cortex have yielded so much information,
such as synfire chains [2], multi-cell firing correlation[6], and statistical correlation
between cell activity and external behavior. However, such observations have so
far relied mostly on the accumulated statistics of cell firing over a large number of
repeated experiments, to obtain any statistically significant effect. This is due to
the very low firing rates (about 10Hz) of individual cells in the associative cortex,
as can be seen in figure 1.
Figure 1: An example of firing times of a single unit. Shown are 48 repetitions of
the same trial, aligned by the external stimulus marker, and drawn horizontally one
on top of another. The accumulated histogram estimates the firing rate in 50msec
bins, and exhibits a clear increase of activity right after the stimulus.
Clearly, simultaneous measurements of the activity of 10 units contain more information than single unit firing and pairwise correlations. The goal of the present
study is to develop and evaluate a statistical method which can better capture the
multi- unit nature of this data, by treating it as a vector stochastic process. The
firing train of each of these units is conventionally modeled as a Poisson process
with a time-dependent average firing rate[2]. Estimating the firing rate parameter
requires careful averaging over a sliding window. The size of this window should be
long enough to include several spikes, and short enough to capture the variability.
Within such a window the process is characterized by a vector of average rates, and
possibly higher order correlations between the units.
The next step, in this framework, is to collect such vector-frames into statistically
similar clusters, which should correspond to similar network activity, as reflected
by the firing of these units. Furthermore, we can facilitate the well-established
formulation of Hidden-Markov-Models[7] to estimate these "hidden" states of the
network activity, similarly to the application of such models to other stochastic
data, e.g. speech. The main advantage of this approach is its ability to characterize
statistically the multi-unit process, in an unsupervised manner, thus allowing for
finer discrimination of individual events. In this report we focus on the statistical
discrimination of two behavioral modes, and demonstrate not only their distinct
multi-unit firing patterns, but also the fact that the coherency level of the firing
activity in these two modes is significantly different.
2
Origin of the data
The data used for the present analysis was collected at the Hadassa Medical School,
by recording from a Rhesus monkey (Macaca mulatta) that was trained to perform a
spatial delayed release task. In this task the monkey had to remember the location
from which a stimulus was given and after a delay of 1-32 seconds, respond by
touching that location. Correct responses were reinforced by a drop of juice. After
completion of the training period, the monkey was anesthetized and prepared for
recording of electrical activity in the frontal cortex. After the monkey recovered
from the surgery the activity of the cortex was recorded, while the monkey was
performing the previously learned routine. Thus the recording does not reflect
the learning process, but rather the cortical activity of the well trained monkey
while performing its task. During each of the recording sessions six microelectrodes
were used simultaneously. With the aid of two pattern detectors and four window discriminators, the activity of up to 11 single units (neurons) was concomitantly
recorded. The recorded data contains the firing times of these units, the behavioral
events of the monkey, and the electro-oculogram (EOG)[5, 2, 4].
2.1
Behavioral modes
To understand the results reported here it is important to focus on the behavioral
aspect of these experiments. The monkey was trained to perform a spatial delayed
response task during which he had to alternate between two behavioral modes. The
monkey initiated the trial, by pressing a central key, and a fixation light was turned
on in front of it. Then after 3-6 seconds a visual stimulus was given either from the
left or from the right. The stimulus was presented for 100 millisec. After a delay
the fixation light was dimmed and the monkey had to touch the key from which the
visual stimulus came ("Go" mode), or keep his hand on the central key regardless
of the external stimulus ("No-Go" mode). For the correct behavior the monkey was
rewarded with a drop of juice. After 4 correct trials all the lights in front of the
monkey blinked (this is called "switch" henceforth), signaling the monkey to change
the behavioral mode - so that if started in the "Go" mode he now had to switch to
"No-Go" mode, or vice versa.
There is a clear statistical indication, based on the accumulated firing histograms,
that the firing patterns are different in these two modes. One of our main experimental results so far is a more quantitative analysis of this observation, both in
terms of the firing patterns directly, and by using a new measure of the coherency
level of the firing activity.
Figure 2: Multi-unit firing trains and their statistical segmentation by the model.
Shown are 4 sec. of activity, in two trials, near the "switch". Estimated firing rates
for each channel are also plotted on top of the firing spikes. The upper example is
taken from the training data, while the lower is outside of the training set. Shown
are also the association probabilities for each of the 8 states of the model. The
monkey's cell-assembly clearly undergoes the state sequence "1", "5", "6", "5" in
both cases. Similar sequence was observed near the same marker in many (but not
all) other instances of the same event during that measurement day.
2.2
Method of analysis
As was indicated before, most of the statistical analysis so far was done by accumulating the firing patterns from many trials, aligned by external markers. This
supervised mode of analysis can be understood from figure 1, where 48 different
"Go" firing trains of a single unit are aligned by the marker. There is a clear increase in the accumulated firing rate following the marker, indicating a response of
this unit to the stimulus. In contrast, we would like to obtain, in an unsupervised
self organizing manner, a statistical characterization of the multi-unit firing activity
around the marked stimuli, as well as in other unobserved cortical processes. We
claim to achieve this goal through characteristic sequences of Markov states.
3
Multivariate Poisson Hidden Markov Model
The following statistical assumptions underlie our model. Channel firing is distributed according to a Poisson distribution. The distances between spikes are
distributed exponentially and their number in each frame, n, depends only on the
mean firing rate $\lambda$, through the distribution
$$P_\lambda(n) = \frac{e^{-\lambda}\,\lambda^{n}}{n!} \qquad (1)$$
The estimation of the parameter $\lambda$ is performed in each channel, within a sliding
window of 500 ms length, every 100 ms. These overlapping windows introduce correlations between the frames, but generate less noisy, smoother firing curves. These
curves are depicted on top of the spike trains for each unit in figure 2.
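A minimal sketch of this rate estimation, assuming spike times in seconds (window and step values from the text; the function name is ours):

```python
import numpy as np

def firing_rates(spike_times, t_end, win=0.5, step=0.1):
    """Spike counts in overlapping windows, converted to rates in Hz."""
    starts = np.arange(0.0, t_end - win + 1e-9, step)
    counts = np.array([np.sum((spike_times >= s) & (spike_times < s + win))
                       for s in starts])
    return counts / win          # one smoothed rate estimate per frame
```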
The multivariate Poisson process is taken as a Maximum Entropy distribution with
i.i.d. Poisson prior, subject to pairwise channel correlations as additional constraints, yielding the following parametric distribution
$$P_{\lambda}(n_1, n_2, \ldots, n_d) = \prod_{i=1}^{d} P_{\lambda_i}(n_i)\,\exp\Big[-\sum_{ij}\lambda_{ij}(n_i-\lambda_i)(n_j-\lambda_j) - \lambda_0\Big] \qquad (2)$$
The $\lambda_{ij}$ are additional Lagrange multipliers, determined by the observed pairwise
correlation $E[(n_i-\lambda_i)(n_j-\lambda_j)]$, while $\lambda_0$ ensures the normalization. In the analysis reported here the pairwise correlation term has not been implemented.
The statistical distance between a frame and the cluster centers is determined by
the probability that this frame is generated by the centroid distribution. This
probability is asymptotically fixed by the empirical information divergence (KL
distance) between the processes[8, 9]. For 1-dimensional Poisson distributions the
divergence is simply given by
$$D(\lambda_1 \,\|\, \lambda_2) = \lambda_1 \log\frac{\lambda_1}{\lambda_2} - \lambda_1 + \lambda_2 \qquad (3)$$
The uncorrelated multi-unit divergence is simply the sum of divergences for all the
units. Using this measure, we can train a multivariate Poisson Hidden Markov
Model, where each state is characterized by such a vector Poisson process. This is
a special case of a method called distributional clustering, recently developed in a
more general setup[10].
The clustering provides us with the desired statistical segmentation of the data into
states. The probability of a frame, $x_t$, to belong to a given state, $s_j$, is determined
by the probability that the vector firing pattern is generated by the state centroid's
distribution. Under our model assumptions this probability is a function solely of
the empirical divergences, Eq. (3), and is given by
$$P(s_j \mid x_t) = \frac{\exp\big[-\beta\, D(x_t \,\|\, s_j)\big]}{\sum_{k}\exp\big[-\beta\, D(x_t \,\|\, s_k)\big]} \qquad (4)$$
where $\beta$ determines the "cluster-hardness". These state probability curves are plotted in figure 2 in correspondence with the spike trains. The most probable state at
each instance determines the most likely segmentation of the data, and the frames
are labeled by this most probable state number. These labels are also shown on top
of the spike trains in figure 2.
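A minimal sketch of the segmentation machinery of Eqs. (3)-(4), with random stand-ins for the estimated rates and state centroids (in the model these come from the distributional-clustering step; function names are ours):

```python
import numpy as np

def poisson_divergence(rates, centroid):
    """Summed per-channel divergence of Eq. (3), vectorized over frames."""
    return np.sum(rates * np.log(rates / centroid) - rates + centroid, axis=-1)

def state_posteriors(frame_rates, centroids, beta=1.0):
    """Soft assignments of Eq. (4): frame_rates (T, d), centroids (K, d)."""
    D = np.stack([poisson_divergence(frame_rates, c) for c in centroids], axis=1)
    logits = -beta * D
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)          # (T, K)

rates = np.random.gamma(2.0, 5.0, size=(50, 8)) + 1e-3   # 50 frames, 8 units
cents = np.random.gamma(2.0, 5.0, size=(4, 8)) + 1e-3    # 4 hidden states
labels = state_posteriors(rates, cents, beta=0.5).argmax(axis=1)
```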
4
Experimental results
We used about 6000 seconds of recordings done during a single day. It is important
to note that this was an exceptionally good day in terms of the measurement quality.
During that period the monkey performed 60 repetitions of his trained routine, in
sets of 4 trials of "Go" mode, followed by 4 trials in the "No-Go" mode. We selected
the 8 most active recorded units for our modeling. The training of the models was
done on the first 4000 seconds of recording, 2000 seconds for each mode, while the
rest was used for testing.
4.1 The nature of the segmentation
Any method can segment the data in some way, but the point is to obtain reliable
predictions using this segmentation. As always, there is some arbitrariness in the
choice of the number of states (or clusters), which ideally is determined by the
data. Here we tested only 8 and 15 states, and in most cases 8 were sufficient for
our purposes. Since we used "fuzzy", or "soft" clustering, each frame has some
probability of belonging to any of the clusters. Although in most cases the most
likely state is clearly defined, the complete picture is seen only from the complete
association distribution. Notice, e.g., in the lower segment of figure 2, where a most
likely state "7" "pops up" between states "6" and "5", but is clearly not significant,
as seen from the corresponding probability curve.
4.2 Characterization of events by state-sequences
The first test of the segmentation is whether it is correlated with the external
markers in any way. Since the markers were not used in any way during the training
of the model (clustering), such correlations are a valid test of consistency. Moreover,
one would like this correspondence to the markers to hold also outside of the training
data. An exhaustive statistical examination of this question has not been made,
as yet, but we could easily find many instances of similar state sequences near the
same external marker, both within and outside of the training data. In figure 2 we give a typical example of this effect. The next step is to train small left-to-right Markov models to spot these events more reliably.
[Figure 3 here: box plots of average firing rate for each unit in each state; top panel: "Go" mode, bottom panel: "No-Go" mode; x-axis: Units.]
Figure 3: Average firing rates for each unit in each state, for the "Go" and "No-Go" modes. Notice that while no single unit clearly discriminates the two modes, their overall statistical discrimination is big enough that on average 100 frames are enough to determine the correct mode, more than 95% of the time.
4.3 Statistical inference of "Go" and "No-Go" modes
Next we examined the statistical difference between models trained on the "Go" vs. "No-Go" modes. Here we obtained a highly significant difference in the cluster centroid's distributions, as shown in figure 3. The average statistical divergence between different clusters within each mode was 9.18 and 9.52 (natural logarithm), in "Go" and "No-Go" respectively, while between those modes the divergence was more than 35.
4.4 Behavioral mode and the network firing coherency
In addition to the clearly different cluster centers in the two modes, there is another interesting and unexpected difference. We would like to call this the firing coherency level; it characterizes the spread of the data around the cluster centers. The average divergence between the frames and their most likely state is consistently much higher in the "No-Go" mode than in the "Go" mode (figure 4). This is in agreement with the assumption that correct performance of the "No-Go" paradigm requires little attention, and therefore the brain may engage in a variety of processes.
Acknowledgments
Special thanks are due to Moshe Abeles for his continuous encouragement and
support, and for his important comments on the manuscript. We would also like
to thank Hagai Bergman and Eilon Vaadia for sharing their data with us, and for
numerous stimulating and encouraging discussions of our approach. This research
was supported in part by a grant from the United States-Israel Binational Science Foundation (BSF).
[Figure 4 here: average divergence for the "Go" and "No-Go" modes across clustering trials, shown for both 8 clusters and 15 clusters; x-axis: Trial Number.]
Figure 4: Firing coherency in the two behavioral modes at different clustering trials.
The "No-Go" average divergence to the cluster centers is systematically higher than
in the "Go" mode. The effect is shown for both 8 and 15 states, and is even more
profound with 8 states.
References
[1] D. O. Hebb, The Organization of Behavior, Wiley, New York (1949)
[2] M. Abeles, Corticonics, (Cambridge University Press, 1991)
[3] J. Kruger, Simultaneous Individual Recordings From Many Cerebral Neurons:
Techniques and Results, Rev. Phys. Biochem. Pharmacol.: 98:pp. 177-233
(1983)
[4] M. Abeles, E. Vaadia, H. Bergman, Firing patterns of single units in the prefrontal cortex and neural-network models. Network 1 (1990)
[5] M. Abeles, H. Bergman, E. Margalit and E. Vaadia, Spatio-Temporal Firing Patterns in the Frontal Cortex of Behaving Monkeys. Hebrew University preprint (1992)
[6] E. Vaadia, E. Ahissar, H. Bergman, and Y. Lavner, Correlated activity of
neurons: a neural code for higher brain functions in: J.Kruger (ed), Neural
Cooperativity pp. 249-279, (Springer-Verlag 1991).
[7] A. B. Poritz, Hidden Markov Models: A Guided Tour, (ICASSP 1988, New York).
[8] T.M. Cover and J.A. Thomas, Information Theory, (Wiley, 1991).
[9] J. Ziv and N. Merhav, A Measure of Relative Entropy between Individual Sequences, Technion preprint (1992)
[10] N. Tishby and F. Pereira, Distributional Clustering, Hebrew University preprint
(1993).
6,468 | 6,850 | Accuracy First: Selecting a Differential Privacy Level
for Accuracy-Constrained ERM
Katrina Ligett
Caltech and Hebrew University
Seth Neel
University of Pennsylvania
Bo Waggoner
University of Pennsylvania
Aaron Roth
University of Pennsylvania
Zhiwei Steven Wu
Microsoft Research
Abstract
Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation
subject to the privacy constraint. As differential privacy is increasingly deployed in
practical settings, it may often be that there is instead a fixed accuracy requirement
for a given computation and the data analyst would like to maximize the privacy of
the computation subject to the accuracy constraint. This raises the question of how
to find and run a maximally private empirical risk minimizer subject to a given
accuracy requirement. We propose a general "noise reduction" framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to "search" the space of privacy levels to find the empirically strongest
one that meets the accuracy constraint, and incurring only logarithmic overhead
in the number of privacy levels searched. The privacy analysis of our algorithm
leads naturally to a version of differential privacy where the privacy parameters
are dependent on the data, which we term ex-post privacy, and which is related
to the recently introduced notion of privacy odometers. We also give an ex-post
privacy analysis of the classical AboveThreshold privacy tool, modifying it to allow
for queries chosen depending on the database. Finally, we apply our approach to
two common objective functions, regularized linear and logistic regression, and
empirically compare our noise reduction methods to (i) inverting the theoretical
utility guarantees of standard private ERM algorithms and (ii) a stronger, empirical
baseline based on binary search.1
1 Introduction and Related Work
Differential Privacy [7, 8] enjoys over a decade of study as a theoretical construct, and a much more
recent set of large-scale practical deployments, including by Google [10] and Apple [11]. As the large
theoretical literature is put into practice, we start to see disconnects between assumptions implicit
in the theory and the practical necessities of applications. In this paper we focus our attention on
one such assumption in the domain of private empirical risk minimization (ERM): that the data
analyst first chooses a privacy requirement, and then attempts to obtain the best accuracy guarantee
(or empirical performance) that she can, given the chosen privacy constraint. Existing theory is
tailored to this view: the data analyst can pick her privacy parameter ? via some exogenous process,
and either plug it into a ?utility theorem? to upper bound her accuracy loss, or simply deploy her
algorithm and (privately) evaluate its performance. There is a rich and substantial literature on private
convex ERM that takes this approach, weaving tight connections between standard mechanisms in
¹ A full version of this paper appears on the arXiv preprint site: https://arxiv.org/abs/1705.10829.
differential privacy and standard tools for empirical risk minimization. These methods for private
ERM include output and objective perturbation [5, 14, 18, 4], covariance perturbation [19], the
exponential mechanism [16, 2], and stochastic gradient descent [2, 21, 12, 6, 20].
While these existing algorithms take a privacy-first perspective, in practice, product requirements
may impose hard accuracy constraints, and privacy (while desirable) may not be the over-riding
concern. In such situations, things are reversed: the data analyst first fixes an accuracy requirement,
and then would like to find the smallest privacy parameter consistent with the accuracy constraint.
Here, we find a gap between theory and practice. The only theoretically sound method available is to
take a "utility theorem" for an existing private ERM algorithm and solve for the smallest value of ε (the differential privacy parameter), and other parameter values that need to be set, consistent with her accuracy requirement, and then run the private ERM algorithm with the resulting ε. But
because utility theorems tend to be worst-case bounds, this approach will generally be extremely
conservative, leading to a much larger value of ε (and hence a much larger leakage of information)
than is necessary for the problem at hand. Alternately, the analyst could attempt an empirical search
for the smallest value of ? consistent with her accuracy goals. However, because this search is itself
a data-dependent computation, it incurs the overhead of additional privacy loss. Furthermore, it is
not a priori clear how to undertake such a search with nontrivial privacy guarantees for two reasons:
first, the worst case could involve a very long search which reveals a large amount of information,
and second, the selected privacy parameter is now itself a data-dependent quantity, and so it is not
sensible to claim a "standard" guarantee of differential privacy for any finite value of ε ex-ante.
In this paper, we provide a principled variant of this second approach, which attempts to empirically
find the smallest value of ε consistent with an accuracy requirement. We give a meta-method that
can be applied to several interesting classes of private learning algorithms and introduces very little
privacy overhead as a result of the privacy-parameter search. Conceptually, our meta-method initially
computes a very private hypothesis, and then gradually subtracts noise (making the computation less
and less private) until a sufficient level of accuracy is achieved. One key technique that significantly
reduces privacy loss over naive search is the use of correlated noise generated by the method of [15],
which formalizes the conceptual idea of "subtracting" noise without incurring additional privacy
overhead. In order to select the most private of these queries that meets the accuracy requirement, we
introduce a natural modification of the now-classic AboveThreshold algorithm [8], which iteratively
checks a sequence of queries on a dataset and privately releases the index of the first to approximately
exceed some fixed threshold. Its privacy cost increases only logarithmically with the number of
queries. We provide an analysis of AboveThreshold that holds even if the queries themselves are
the result of differentially private computations, showing that if AboveThreshold terminates after t
queries, one only pays the privacy costs of AboveThreshold plus the privacy cost of revealing those
first t private queries. When combined with the above-mentioned correlated noise technique of [15],
this gives an algorithm whose privacy loss is equal to that of the final hypothesis output (the previous ones coming "for free") plus the privacy loss of AboveThreshold. Because the privacy guarantees
achieved by this approach are not fixed a priori, but rather are a function of the data, we introduce
and apply a new, corresponding privacy notion, which we term ex-post privacy, and which is closely
related to the recently introduced notion of "privacy odometers" [17].
In Section 4, we empirically evaluate our noise reduction meta-method, which applies to any ERM
technique which can be described as a post-processing of the Laplace mechanism. This includes both
direct applications of the Laplace mechanism, like output perturbation [5]; and more sophisticated
methods like covariance perturbation [19], which perturbs the covariance matrix of the data and
then performs an optimization using the noisy data. Our experiments concentrate on ℓ₂-regularized least-squares regression and ℓ₂-regularized logistic regression, and we apply our noise reduction
meta-method to both output perturbation and covariance perturbation. Our empirical results show
that the active, ex-post privacy approach massively outperforms inverting the theory curve, and also
improves on a baseline "ε-doubling" approach.
2 Privacy Background and Tools
2.1 Differential Privacy and Ex-Post Privacy
Let X denote the data domain. We call two datasets D, D′ ∈ X* neighbors (written as D ∼ D′) if D can be derived from D′ by replacing a single data point with some other element of X.
Definition 2.1 (Differential Privacy [7]). Fix ε ≥ 0. A randomized algorithm A : X* → O is ε-differentially private if for every pair of neighboring data sets D ∼ D′ ∈ X*, and for every event S ⊆ O:
$$\Pr[A(D) \in S] \le \exp(\varepsilon)\, \Pr[A(D') \in S].$$
We call exp(ε) the privacy risk factor.
It is possible to design computations that do not satisfy the differential privacy definition, but whose
outputs are private to an extent that can be quantified after the computation halts. For example,
consider an experiment that repeatedly runs an ε₀-differentially private algorithm, until a stopping condition defined by the output of the algorithm itself is met. This experiment does not satisfy ε-differential privacy for any fixed value of ε, since there is no fixed maximum number of rounds for which the experiment will run (for a fixed number of rounds, a simple composition theorem, Theorem 2.5, shows that the ε-guarantees in a sequence of computations "add up"). However, if ex post we see that the experiment has stopped after k rounds, the data can in some sense be assured an "ex-post privacy loss" of only kε₀. Rogers et al. [17] initiated the study of privacy odometers, which
formalize this idea. They study privacy composition when the data analyst can choose the privacy
parameters of subsequent computations as a function of the outcomes of previous computations.
We apply a related idea here, for a different purpose. Our goal is to design one-shot algorithms that
always achieve a target accuracy but that may have variable privacy levels depending on their input.
Definition 2.2. Given a randomized algorithm A : X* → O, define the ex-post privacy loss² of A on outcome o to be
$$\mathrm{Loss}(o) = \max_{D, D' : D \sim D'} \log \frac{\Pr[A(D) = o]}{\Pr[A(D') = o]}.$$
We refer to exp(Loss(o)) as the ex-post privacy risk factor.
Definition 2.3 (Ex-Post Differential Privacy). Let E : O → (R≥0 ∪ {∞}) be a function on the outcome space of algorithm A : X* → O. Given an outcome o = A(D), we say that A satisfies E(o)-ex-post differential privacy if for all o ∈ O, Loss(o) ≤ E(o).
Note that if E(o) ≤ ε for all o, A is ε-differentially private. Ex-post differential privacy has the same semantics as differential privacy, once the output of the mechanism is known: it bounds the log-likelihood ratio of the dataset being D vs. D′, which controls how an adversary with an arbitrary prior on the two cases can update her posterior.
2.2 Differential Privacy Tools
Differentially private computations enjoy two nice properties:
Theorem 2.4 (Post Processing [7]). Let A : X* → O be any ε-differentially private algorithm, and let f : O → O′ be any function. Then the algorithm f ∘ A : X* → O′ is also ε-differentially private.
Post-processing implies that, for example, every decision process based on the output of a differentially private algorithm is also differentially private.
Theorem 2.5 (Composition [7]). Let A₁ : X* → O, A₂ : X* → O′ be algorithms that are ε₁- and ε₂-differentially private, respectively. Then the algorithm A : X* → O × O′ defined as A(x) = (A₁(x), A₂(x)) is (ε₁ + ε₂)-differentially private.
The composition theorem holds even if the composition is adaptive; see [9] for details.
The Laplace mechanism. The most basic subroutine we will use is the Laplace mechanism. The Laplace distribution centered at 0 with scale b is the distribution with probability density function
$$\mathrm{Lap}(z \mid b) = \frac{1}{2b}\, e^{-|z|/b}.$$
We say X ∼ Lap(b) when X has Laplace distribution with scale b. Let f : X* → R^d be an arbitrary d-dimensional function. The ℓ₁ sensitivity of f is defined to be Δ₁(f) = max_{D∼D′} ‖f(D) − f(D′)‖₁. The Laplace mechanism with parameter ε simply adds noise drawn independently from Lap(Δ₁(f)/ε) to each coordinate of f(x).
² If A's output is from a continuous distribution rather than discrete, we abuse notation and write Pr[A(D) = o] to mean the probability density at output o.
Theorem 2.6 ([7]). The Laplace mechanism is ε-differentially private.
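For concreteness, here is a minimal sketch of the Laplace mechanism as just defined. The caller is assumed to supply a correct ℓ₁ sensitivity bound for the query; the function name and interface are our own.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release f(D) = `value` with epsilon-differential privacy (Theorem 2.6),
    given an l1-sensitivity bound `sensitivity` for f supplied by the caller."""
    rng = rng or np.random.default_rng()
    value = np.asarray(value, dtype=float)
    return value + rng.laplace(scale=sensitivity / epsilon, size=value.shape)
```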
Gradual private release. Koufogiannis et al. [15] study how to gradually release private data using the Laplace mechanism with an increasing sequence of ε values, with a privacy cost scaling only with the privacy of the marginal distribution on the least private release, rather than the sum of the privacy costs of independent releases. For intuition, the algorithm can be pictured as a continuous random walk starting at some private data v with the property that the marginal distribution at each point in time is Laplace centered at v, with variance increasing over time. Releasing the value of the random walk at a fixed point in time gives a certain output distribution, for example v̂, with a certain privacy guarantee ε. To produce v̂′ whose ex-ante distribution has higher variance (is more private), one can simply "fast forward" the random walk from a starting point of v̂ to reach v̂′; to produce a less private v̂′, one can "rewind". The total privacy cost is max{ε, ε′} because, given the "least private" point (say v̂), all "more private" points can be derived as post-processings given by taking a random walk of a certain length starting at v̂. Note that were the Laplace random variables used for each release independent, the composition theorem would require summing the ε values of all releases.
In our private algorithms, we will use their noise reduction mechanism as a building block to generate a list of private hypotheses θ₁, . . . , θ_T with gradually increasing ε values. Importantly, releasing any prefix (θ₁, . . . , θ_t) only incurs the privacy loss in ε_t. More formally:
Algorithm 1 Noise Reduction [15]: NR(v, Δ, {ε_t})
Input: private vector v, sensitivity parameter Δ, list ε₁ < ε₂ < · · · < ε_T
Set v̂_T := v + Lap(Δ/ε_T)    ▷ drawn i.i.d. for each coordinate
for t = T − 1, T − 2, . . . , 1 do
    With probability (ε_t/ε_{t+1})²: set v̂_t := v̂_{t+1}
    Else: set v̂_t := v̂_{t+1} + Lap(Δ/ε_t)    ▷ drawn i.i.d. for each coordinate
Return v̂₁, . . . , v̂_T
Theorem 2.7 ([15]). Let f have ℓ₁ sensitivity Δ and let v̂₁, . . . , v̂_T be the output of Algorithm 1 on v = f(D), Δ, and the increasing list ε₁, . . . , ε_T. Then for any t, the algorithm which outputs the prefix (v̂₁, . . . , v̂_t) is ε_t-differentially private.
2.3 AboveThreshold with Private Queries
Our high-level approach to our eventual ERM problem will be as follows: Generate a sequence of hypotheses θ₁, . . . , θ_T, each with increasing accuracy and decreasing privacy; then test their accuracy levels sequentially, outputting the first one whose accuracy is "good enough". The classical AboveThreshold algorithm [8] takes in a dataset and a sequence of queries and privately outputs the index of the first query to exceed a given threshold (with some error due to noise). We would like to use AboveThreshold to perform these accuracy checks, but there is an important obstacle: for us, the "queries" themselves depend on the private data.³ A standard composition analysis would involve first privately publishing all the queries, then running AboveThreshold on these queries (which are now public). Intuitively, though, it would be much better to generate and publish the queries one at a time, until AboveThreshold halts, at which point one would not publish any more queries. The problem with analyzing this approach is that, a priori, we do not know when AboveThreshold will terminate; to address this, we analyze the ex-post privacy guarantee of the algorithm.⁴
Let us say that an algorithm M(D) = (f₁, . . . , f_T) is (ε₁, . . . , ε_T)-prefix-private if for each t, the function that runs M(D) and outputs just the prefix (f₁, . . . , f_t) is ε_t-differentially private.
Lemma 2.8. Let M : X* → (X* → O)^T be a (ε₁, . . . , ε_T)-prefix-private algorithm that returns T queries, and let each query output by M have ℓ₁ sensitivity at most Δ. Then Algorithm 2 run on D, ε_A, W, Δ, and M is E-ex-post differentially private for E((t, ·)) = ε_A + ε_t for any t ∈ [T].
³ In fact, there are many applications beyond our own in which the sequence of queries input to AboveThreshold might be the result of some private prior computation on the data, and where we would like to release both the stopping index of AboveThreshold and the "query object". (In our case, the query objects will be parameterized by learned hypotheses θ₁, . . . , θ_T.)
⁴ This result does not follow from a straightforward application of privacy odometers from [17], because the privacy analysis of algorithms like the noise reduction technique is not compositional.
Algorithm 2 InteractiveAboveThreshold: IAT(D, ε, W, Δ, M)
Input: Dataset D, privacy loss ε, threshold W, ℓ₁ sensitivity Δ, algorithm M
Let Ŵ = W + Lap(2Δ/ε)
for each query t = 1, . . . , T do
    Query f_t ← M(D)_t
    if f_t(D) + Lap(4Δ/ε) ≥ Ŵ then Output (t, f_t); Halt.
Output (⊥, ⊥).
The proof, which is a variant on the proof of privacy for AboveThreshold [8], appears in the full
version, along with an accuracy theorem for IAT.
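A sketch of Algorithm 2 that consumes queries lazily, so that queries past the halting point are never computed or revealed; the generator interface and names are our own.

```python
import numpy as np

def interactive_above_threshold(queries, threshold, epsilon, sensitivity, rng=None):
    """Algorithm 2 (sketch). `queries` is an iterable yielding f_t(D) one at a
    time. Returns the 0-based index of the first noisy query to cross the
    noisy threshold, or None if none does."""
    rng = rng or np.random.default_rng()
    noisy_w = threshold + rng.laplace(scale=2 * sensitivity / epsilon)
    for t, f_val in enumerate(queries):
        if f_val + rng.laplace(scale=4 * sensitivity / epsilon) >= noisy_w:
            return t
    return None
```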
3 Noise-Reduction with Private ERM
In this section, we provide a general private ERM framework that allows us to approach the best
privacy guarantee achievable on the data given a target excess risk goal. Throughout the section,
we consider an input dataset D that consists of n row vectors X₁, X₂, . . . , X_n ∈ R^p and a column y ∈ R^n. We will assume that each ‖X_i‖₁ ≤ 1 and |y_i| ≤ 1. Let d_i = (X_i, y_i) ∈ R^{p+1} be the i-th data record. Let ℓ be a loss function such that for any hypothesis θ and any data point (X_i, y_i) the loss is ℓ(θ, (X_i, y_i)). Given an input dataset D and a regularization parameter λ, the goal is to minimize the following regularized empirical loss function over some feasible set C:
$$L(\theta, D) = \frac{1}{n} \sum_{i=1}^{n} \ell(\theta, (X_i, y_i)) + \frac{\lambda}{2} \|\theta\|_2^2.$$
Let θ* = argmin_{θ∈C} L(θ, D). Given a target accuracy parameter α, we wish to privately compute a θ_p that satisfies L(θ_p, D) ≤ L(θ*, D) + α, while achieving the best ex-post privacy guarantee. For simplicity, we will sometimes write L(θ) for L(θ, D).
One simple baseline approach is a "doubling method": Start with a small ε value, run an ε-differentially private algorithm to compute a hypothesis θ and use the Laplace mechanism to estimate the excess risk of θ; if the excess risk is lower than the target, output θ; otherwise double the value of ε and repeat the same process. (See the full version for details.) As a result, we pay for privacy loss for every hypothesis we compute and every excess risk we estimate.
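A sketch of this baseline follows. `private_erm` and `private_excess_risk` are placeholders for concrete ε-DP subroutines, and the simple 2ε-per-round accounting is an assumption meant only to illustrate why every round of the doubling search adds to the privacy bill.

```python
def doubling_method(private_erm, private_excess_risk, alpha, eps_start=1e-3,
                    eps_max=10.0):
    """Sketch of the epsilon-doubling baseline. `private_erm(eps)` returns an
    eps-DP hypothesis; `private_excess_risk(theta, eps)` returns an eps-DP
    (Laplace-noised) estimate of the excess risk. Both are placeholders."""
    eps, total_privacy_loss = eps_start, 0.0
    while eps <= eps_max:
        theta = private_erm(eps)
        total_privacy_loss += 2 * eps  # one ERM call plus one risk estimate
        if private_excess_risk(theta, eps) <= alpha:
            return theta, total_privacy_loss
        eps *= 2
    return None, total_privacy_loss
```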
In comparison, our meta-method provides a more cost-effective way to select the privacy level. The
algorithm takes a more refined set of privacy levels ε₁ < . . . < ε_T as input and generates a sequence of hypotheses θ₁, . . . , θ_T such that the generation of each θ_t is ε_t-private. Then it releases the hypotheses θ_t in order, halting as soon as a released hypothesis meets the accuracy goal. Importantly, there are two key components that reduce the privacy loss in our method:
1. We use Algorithm 1, the "noise reduction" method of [15], for generating the sequence of hypotheses: we first compute a very private and noisy θ₁, and then obtain the subsequent hypotheses by gradually "de-noising" θ₁. As a result, any prefix (θ₁, . . . , θ_k) incurs a privacy loss of only ε_k (as opposed to (ε₁ + . . . + ε_k) if the hypotheses were independent).
2. When evaluating the excess risk of each hypothesis, we use Algorithm 2, InteractiveAboveThreshold, to determine if its excess risk exceeds the target threshold. This incurs
substantially less privacy loss than independently evaluating the excess risk of each hypothesis using the Laplace mechanism (and hence allows us to search a finer grid of values).
For the rest of this section, we will instantiate our method concretely for two ERM problems: ridge
regression and logistic regression. In particular, our noise-reduction method is based on two private
ERM algorithms: the recently introduced covariance perturbation technique [19] and the output
perturbation method [5].
3.1 Covariance Perturbation for Ridge Regression
In ridge regression, we consider the squared loss function: ℓ((X_i, y_i), θ) = ½(y_i − ⟨θ, X_i⟩)², and hence the empirical loss over the data set is defined as
$$L(\theta, D) = \frac{1}{2n} \|y - X\theta\|_2^2 + \frac{\lambda \|\theta\|_2^2}{2},$$
where X denotes the (n × p) matrix with row vectors X₁, . . . , X_n and y = (y₁, . . . , y_n). Since the optimal solution for the unconstrained problem has ℓ₂ norm no more than √(1/λ) (see the full version for a proof), we will focus on optimizing θ over the constrained set C = {a ∈ R^p | ‖a‖₂ ≤ √(1/λ)}, which will be useful for bounding the ℓ₁ sensitivity of the empirical loss.
Before we formally introduce the covariance perturbation algorithm due to [19], observe that the optimal solution θ* can be computed as
$$\theta^* = \operatorname*{argmin}_{\theta \in C} L(\theta, D) = \operatorname*{argmin}_{\theta \in C} \; \frac{\theta^{\top}(X^{\top}X)\theta - 2\langle X^{\top}y, \theta\rangle}{2n} + \frac{\lambda \|\theta\|_2^2}{2}.$$
In other words, θ* only depends on the private data through X^⊤y and X^⊤X. To compute a private hypothesis, the covariance perturbation method simply adds Laplace noise to each entry of X^⊤y and X^⊤X (the covariance matrix), and solves the optimization based on the noisy matrix and vector. The formal description of the algorithm and its guarantee are in Theorem 3.1. Our analysis differs from the one in [19] in that their paper considers the "local privacy" setting, and also adds Gaussian noise whereas we use Laplace. The proof is deferred to the full version.
Theorem 3.1. Fix any ε > 0. For any input data set D, consider the mechanism M that computes
$$\theta_p = \operatorname*{argmin}_{\theta \in C} \; \frac{1}{2n}\Big(\theta^{\top}(X^{\top}X + B)\theta - 2\langle X^{\top}y + b, \theta\rangle\Big) + \frac{\lambda \|\theta\|_2^2}{2},$$
where B ∈ R^{p×p} and b ∈ R^{p×1} are random Laplace matrices such that each entry of B and b is drawn from Lap(4/ε). Then M satisfies ε-differential privacy and the output θ_p satisfies
$$\mathbb{E}_{B,b}\big[L(\theta_p) - L(\theta^*)\big] \le \frac{4\sqrt{2}\,\big(2\sqrt{p}/\lambda + p/\sqrt{\lambda}\big)}{n\varepsilon}.$$
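A sketch of the Theorem 3.1 mechanism: perturb X^⊤X and X^⊤y with Lap(4/ε) noise, then solve the noisy ridge problem. Solving the unconstrained problem in closed form and clipping to C, and symmetrizing the noise on the covariance, are our simplifications; a faithful implementation would use a constrained solver, and the noisy matrix may need regularity checks.

```python
import numpy as np

def covariance_perturbation_ridge(X, y, lam, epsilon, rng=None):
    """Sketch of the Theorem 3.1 mechanism: perturb X^T X and X^T y with
    Lap(4/epsilon) noise, then solve the noisy ridge problem in closed form
    and clip the result to the feasible ball C."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    B = rng.laplace(scale=4.0 / epsilon, size=(p, p))
    B = (B + B.T) / 2  # keep the noisy covariance symmetric (our choice)
    b = rng.laplace(scale=4.0 / epsilon, size=p)
    theta = np.linalg.solve(X.T @ X + B + n * lam * np.eye(p), X.T @ y + b)
    radius = np.sqrt(1.0 / lam)           # C = {a : ||a||_2 <= sqrt(1/lam)}
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)
```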
In our algorithm CovNR, we will apply the noise reduction method, Algorithm 1, to produce a sequence of noisy versions of the private data (X^⊤X, X^⊤y): (Z¹, z¹), . . . , (Z^T, z^T), one for each privacy level. Then for each (Z^t, z^t), we will compute the private hypothesis by solving the noisy version of the optimization problem in Equation (1). The full description of our algorithm CovNR is in Algorithm 3, and satisfies the following guarantee:
Theorem 3.2. The instantiation of CovNR(D, {ε₁, . . . , ε_T}, α, δ) outputs a hypothesis θ_p that with probability 1 − δ satisfies L(θ_p) − L(θ*) ≤ α. Moreover, it is E-ex-post differentially private, where the privacy loss function E : (([T] ∪ {⊥}) × R^p) → (R≥0 ∪ {∞}) is defined as E((k, ·)) = ε₀ + ε_k for any k ≠ ⊥, E((⊥, ·)) = ∞, and
$$\varepsilon_0 = \frac{16\big(\sqrt{1/\lambda} + 1\big)^2 \log(2T/\delta)}{n\alpha}$$
is the privacy loss incurred by IAT.
is the privacy loss incurred by IAT.
3.2
Output Perturbation for Logistic Regression
Next, we show how to combine the output perturbation method with noise reduction for the
ridge regression problem.5 In this setting, the input data consists of n labeled examples
(X1 , y1 ), . . . , (Xn , yn ), such that for each i, Xi ? Rp , kXi k1 ? 1, and yi ? {?1, 1}. The goal is to
train a linear classifier given by a weight vector ? for the examples from the two classes. We consider
the logistic loss function: `(?, (Xi , yi )) = log(1 + exp(?yi ?| Xi )), and the empirical loss is
n
1X
?k?k22
L(?, D) =
log(1 + exp(?yi ?| Xi )) +
.
n i=1
2
⁵ We study the ridge regression problem for concreteness. Our method works for any ERM problem with strongly convex loss functions.
Algorithm 3 Covariance Perturbation with Noise-Reduction: CovNR(D, {ε₁, . . . , ε_T}, α, δ)
Input: private data set D = (X, y), accuracy parameter α, privacy levels ε₁ < ε₂ < . . . < ε_T, and failure probability δ
Instantiate InteractiveAboveThreshold: A = IAT(D, ε₀, −α/2, Δ, ·) with ε₀ = 16Δ(log(2T/δ))/α and Δ = (√(1/λ) + 1)²/n
Let C = {a ∈ R^p | ‖a‖₂ ≤ √(1/λ)} and θ* = argmin_{θ∈C} L(θ)
Compute noisy data:
{Z^t} = NR((X^⊤X), 2, {ε₁/2, . . . , ε_T/2}),  {z^t} = NR((X^⊤y), 2, {ε₁/2, . . . , ε_T/2})
for t = 1, . . . , T do
    θ^t = argmin_{θ∈C} (1/2n)(θ^⊤ Z^t θ − 2⟨z^t, θ⟩) + (λ/2)‖θ‖₂²    (1)
    Let f^t(D) = L(θ*, D) − L(θ^t, D); Query A with query f^t to check accuracy
    if A returns (t, f^t) then Output (t, θ^t)    ▷ Accurate hypothesis found.
Output: (⊥, θ*)
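Putting the pieces together, a compact sketch of Algorithm 3's main loop, reusing the `noise_reduction` and `interactive_above_threshold` sketches above. The loss evaluator and the noisy ridge solver are passed in as placeholders, and the constants follow the pseudocode as reconstructed here.

```python
import numpy as np

def cov_nr(X, y, lam, eps_list, alpha, delta, loss, solve_noisy_ridge, rng=None):
    """Sketch of Algorithm 3. `loss(theta)` evaluates L(theta, D);
    `solve_noisy_ridge(Z, z)` solves Eq. (1) for one noisy pair (Z, z).
    Both are placeholders for concrete implementations."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    sens = (np.sqrt(1.0 / lam) + 1) ** 2 / n       # Delta from Algorithm 3
    eps0 = 16 * sens * np.log(2 * len(eps_list) / delta) / alpha
    Zs = noise_reduction(X.T @ X, 2.0, [e / 2 for e in eps_list], rng)
    zs = noise_reduction(X.T @ y, 2.0, [e / 2 for e in eps_list], rng)
    thetas = [solve_noisy_ridge(Z, z) for Z, z in zip(Zs, zs)]
    theta_star = solve_noisy_ridge(X.T @ X, X.T @ y)  # exact (non-noisy) optimum
    t = interactive_above_threshold(
        (loss(theta_star) - loss(th) for th in thetas),   # the queries f^t
        threshold=-alpha / 2.0, epsilon=eps0, sensitivity=sens, rng=rng)
    return (t, thetas[t]) if t is not None else (None, theta_star)
```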
The output perturbation method simply adds Laplace noise to perturb each coordinate of the optimal
solution θ*. The following is the formal guarantee of output perturbation. Our analysis deviates
slightly from the one in [5] since we are adding Laplace noise (see the full version).
Theorem 3.3. Fix any ε > 0. Let r = 2√p/(nλε). For any input dataset D, consider the mechanism that first computes θ* = argmin_{θ∈R^p} L(θ), then outputs θ_p = θ* + b, where b is a random vector with its entries drawn i.i.d. from Lap(r). Then M satisfies ε-differential privacy, and θ_p has excess risk
$$\mathbb{E}_b\big[L(\theta_p) - L(\theta^*)\big] \le \frac{2\sqrt{2}\,p}{n\lambda\varepsilon} + \frac{4p^2}{n^2\lambda\varepsilon^2}.$$
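The mechanism itself is a one-liner; the sketch below uses the noise scale r as reconstructed in Theorem 3.3 above, which should be checked against the full version of the paper.

```python
import numpy as np

def output_perturbation(theta_star, n, lam, epsilon, rng=None):
    """Sketch of the Theorem 3.3 mechanism: add i.i.d. Lap(r) noise to each
    coordinate of the exact ERM solution theta_star, with
    r = 2*sqrt(p)/(n*lam*epsilon). theta_star comes from any solver."""
    rng = rng or np.random.default_rng()
    p = theta_star.shape[0]
    r = 2.0 * np.sqrt(p) / (n * lam * epsilon)
    return theta_star + rng.laplace(scale=r, size=p)
```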
Given the output perturbation method, we can simply apply the noise reduction method NR to the optimal hypothesis θ* to generate a sequence of noisy hypotheses. We will again use InteractiveAboveThreshold to check the excess risk of the hypotheses. The full algorithm OutputNR follows the same structure in Algorithm 3, and we defer the formal description to the full version.
Theorem 3.4. The instantiation of OutputNR(D, ε₀, {ε₁, . . . , ε_T}, α, δ) is E-ex-post differentially private and outputs a hypothesis θ_p that with probability 1 − δ satisfies L(θ_p) − L(θ*) ≤ α, where the privacy loss function E : (([T] ∪ {⊥}) × R^p) → (R≥0 ∪ {∞}) is defined as E((k, ·)) = ε₀ + ε_k for any k ≠ ⊥, E((⊥, ·)) = ∞, and
$$\varepsilon_0 \le \frac{32 \log(2T/\delta)\,\sqrt{2\log(2/\delta)}}{n\alpha}$$
is the privacy loss incurred by IAT.
Proof sketch of Theorems 3.2 and 3.4. The accuracy guarantees for both algorithms follow from an accuracy guarantee of the IAT algorithm (a variant on the standard AboveThreshold bound) and the fact that we output θ* if IAT identifies no accurate hypothesis. For the privacy guarantee, first note that any prefix of the noisy hypotheses θ₁, . . . , θ_t satisfies ε_t-differential privacy because of our instantiation of the Laplace mechanism (see the full version for the ℓ₁ sensitivity analysis) and the noise-reduction method NR. Then the ex-post privacy guarantee directly follows Lemma 2.8.
4 Experiments
To evaluate the methods described above, we conducted empirical evaluations in two settings. We
used ridge regression to predict (log) popularity of posts on Twitter in the dataset of [1], with p = 77
features and subsampled to n =100,000 data points. Logistic regression was applied to classifying
[Figure 1 here: four panels plotting ex-post privacy loss ε against the input α (excess error guarantee). (a) Linear (ridge) regression vs. the theory approach (CovarPert theory, OutputPert theory, NoiseReduction); (b) regularized logistic regression vs. the theory approach (OutputPert theory, NoiseReduction); (c) linear (ridge) regression vs. DoublingMethod (Doubling, NoiseReduction); (d) regularized logistic regression vs. DoublingMethod (Doubling, NoiseReduction).]
Figure 1: Ex-post privacy loss. (1a) and (1c), left, represent ridge regression on the Twitter dataset, where Noise Reduction and DoublingMethod both use Covariance Perturbation. (1b) and (1d), right, represent logistic regression on the KDD-99 Cup dataset, where both Noise Reduction and DoublingMethod use Output Perturbation. The top plots compare Noise Reduction to the "theory approach": running the algorithm once using the value of ε that guarantees the desired expected error via a utility theorem. The bottom compares to the DoublingMethod baseline. Note the top plots are generous to the theory approach: the theory curves promise only expected error, whereas Noise Reduction promises a high probability guarantee. Each point is an average of 80 trials (Twitter dataset) or 40 trials (KDD-99 dataset).
network events as innocent or malicious in the KDD-99 Cup dataset [13], with 38 features and subsampled to 100,000 points. Details of parameters and methods appear in the full version.⁶
In each case, we tested the algorithm's average ex-post privacy loss for a range of input accuracy goals α, fixing a modest failure probability δ = 0.1 (and we observed that excess risks were concentrated well below α/2, suggesting a pessimistic analysis). The results show our meta-method gives a large improvement over the "theory" approach of simply inverting utility theorems for private ERM algorithms. (In fact, the utility theorem for the popular private stochastic gradient descent algorithm does not even give meaningful guarantees for the ranges of parameters tested; one would need an order of magnitude more data points, and even then the privacy losses are enormous, perhaps due to loose constants in the analysis.)
To gauge the more modest improvement over DoublingMethod, note that the variation in the privacy risk factor e^ε can still be very large; for instance, in the ridge regression setting of α = 0.05, Noise Reduction has e^ε ≈ 10.0 while DoublingMethod has e^ε ≈ 495; at α = 0.075, the privacy risk factors are 4.65 and 56.6 respectively.
⁶ A full implementation of our algorithms appears at: https://github.com/steven7woo/Accuracy-First-Differential-Privacy.
Interestingly, for our meta-method, the contribution to privacy loss from "testing" hypotheses (the InteractiveAboveThreshold technique) was significantly larger than that from "generating" them
(NoiseReduction). One place where the InteractiveAboveThreshold analysis is loose is in using a
theoretical bound on the maximum norm of any hypothesis to compute the sensitivity of queries.
The actual norms of hypotheses tested was significantly lower which, if taken as guidance to the
practitioner in advance, would drastically improve the privacy guarantee of both adaptive methods.
5 Future Directions
Throughout this paper, we focus on ε-differential privacy, instead of the weaker (ε, δ)-(approximate) differential privacy. Part of the reason is that an analogue of Lemma 2.8 does not seem to hold for (ε, δ)-differentially private queries without further assumptions, as the necessity to union-bound over the δ "failure probability" that the privacy loss is bounded for each query can erase the ex-post gains. We leave obtaining similar results for approximate differential privacy as an open problem. More generally, we wish to extend our ex-post privacy framework to approximate differential privacy, or to the stronger notion of concentrated differential privacy [3]. Such results will allow us to obtain ex-post privacy guarantees for a much broader class of algorithms.
References
[1] The AMA Team at Laboratoire d'Informatique de Grenoble. Buzz prediction in online social
media, 2017.
[2] Raef Bassily, Adam D. Smith, and Abhradeep Thakurta. Private empirical risk minimization,
revisited. CoRR, abs/1405.7085, 2014.
[3] Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions,
and lower bounds. In Theory of Cryptography - 14th International Conference, TCC 2016-B,
Beijing, China, October 31 - November 3, 2016, Proceedings, Part I, pages 635-658, 2016.
[4] Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Advances
in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual
Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada,
December 8-11, 2008, pages 289-296, 2008.
[5] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical
risk minimization. Journal of Machine Learning Research, 12:1069-1109, 2011.
[6] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy and statistical
minimax rates. In 51st Annual Allerton Conference on Communication, Control, and Computing,
Allerton 2013, Allerton Park & Retreat Center, Monticello, IL, USA, October 2-4, 2013, page
1592, 2013.
[7] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to
sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265?284.
Springer, 2006.
[8] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
[9] Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In
Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 51-60.
IEEE, 2010.
[10] Giulia Fanti, Vasyl Pihur, and Úlfar Erlingsson. Building a rappor with the unknown: Privacy-preserving learning of associations and data dictionaries. Proceedings on Privacy Enhancing
Technologies (PoPETS), issue 3, 2016, 2016.
[11] Andy Greenberg. Apple's "differential privacy" is about collecting your data, but not your data.
Wired Magazine, 2016.
[12] Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning.
In COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh,
Scotland, pages 24.1-24.34, 2012.
[13] KDD'99. KDD Cup 1999 data, 1999.
[14] Daniel Kifer, Adam D. Smith, and Abhradeep Thakurta. Private convex optimization for
empirical risk minimization with applications to high-dimensional regression. In COLT 2012
- The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland,
pages 25.1-25.40, 2012.
[15] Fragkiskos Koufogiannis, Shuo Han, and George J. Pappas. Gradual release of sensitive data
under differential privacy. Journal of Privacy and Confidentiality, 7, 2017.
[16] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Foundations
of Computer Science, 2007. FOCS?07. 48th Annual IEEE Symposium on, pages 94?103. IEEE,
2007.
[17] Ryan M Rogers, Aaron Roth, Jonathan Ullman, and Salil Vadhan. Privacy odometers and
filters: Pay-as-you-go composition. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and
R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1921?1929.
Curran Associates, Inc., 2016.
[18] Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, and Nina Taft. Learning in a large
function space: Privacy-preserving mechanisms for SVM learning. CoRR, abs/0911.5708, 2009.
[19] Adam Smith, Jalaj Upadhyay, and Abhradeep Thakurta. Is interaction necessary for distributed
private learning? IEEE Symposium on Security and Privacy, 2017.
[20] Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with
differentially private updates. In IEEE Global Conference on Signal and Information Processing,
GlobalSIP 2013, Austin, TX, USA, December 3-5, 2013, pages 245-248, 2013.
[21] Oliver Williams and Frank McSherry. Probabilistic inference and differential privacy. In
Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural
Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010,
Vancouver, British Columbia, Canada, pages 2451-2459, 2010.
6,469 | 6,851 | EX2: Exploration with Exemplar Models for Deep
Reinforcement Learning
Justin Fu*
John D. Co-Reyes*
Sergey Levine
University of California Berkeley
{justinfu,jcoreyes,svlevine}@eecs.berkeley.edu
Abstract
Deep reinforcement learning algorithms have been shown to learn complex tasks
using highly general policy classes. However, sparse reward problems remain a
significant challenge. Exploration methods based on novelty detection have been
particularly successful in such settings but typically require generative or predictive
models of the observations, which can be difficult to train when the observations
are very high-dimensional and complex, as in the case of raw images. We propose a
novelty detection algorithm for exploration that is based entirely on discriminatively
trained exemplar models, where classifiers are trained to discriminate each visited
state against all others. Intuitively, novel states are easier to distinguish against
other states seen during training. We show that this kind of discriminative modeling
corresponds to implicit density estimation, and that it can be combined with count-based exploration to produce competitive results on a range of popular benchmark
tasks, including state-of-the-art results on challenging egocentric observations in
the vizDoom benchmark.
1 Introduction
Recent work has shown that methods that combine reinforcement learning with rich function approximators, such as deep neural networks, can solve a range of complex tasks, from playing Atari
games (Mnih et al., 2015) to controlling simulated robots (Schulman et al., 2015). Although deep
reinforcement learning methods allow for complex policy representations, they do not by themselves
solve the exploration problem: when the reward signals are rare and sparse, such methods can struggle
to acquire meaningful policies. Standard exploration strategies, such as -greedy strategies (Mnih
et al., 2015) or Gaussian noise (Lillicrap et al., 2015), are undirected and do not explicitly seek out
interesting states. A promising avenue for more directed exploration is to explicitly estimate the
novelty of a state, using predictive models that generate future states (Schmidhuber, 1990; Stadie
et al., 2015; Achiam & Sastry, 2017) or model state densities (Bellemare et al., 2016; Tang et al., 2017;
Abel et al., 2016). Related concepts such as count-based bonuses have been shown to provide substantial speedups in classic reinforcement learning (Strehl & Littman, 2009; Kolter & Ng, 2009), and
several recent works have proposed information-theoretic or probabilistic approaches to exploration
based on this idea (Houthooft et al., 2016; Chentanez et al., 2005) by drawing on formal results in
simpler discrete or linear systems (Bubeck & Cesa-Bianchi, 2012). However, most novelty estimation
methods rely on building generative or predictive models that explicitly model the distribution over
the current or next observation. When the observations are complex and high-dimensional, such as in
the case of raw images, these models can be difficult to train, since generating and predicting images
and other high-dimensional objects is still an open problem, despite recent progress (Salimans et al.,
2016). Though successful results with generative novelty models have been reported with simple
synthetic images, such as in Atari games (Bellemare et al., 2016; Tang et al., 2017), we show in our
* Equal contribution.
experiments that such generative methods struggle with more complex and naturalistic observations,
such as the ego-centric image observations in the vizDoom benchmark.
How can we estimate the novelty of visited states, and thereby provide an intrinsic motivation signal
for reinforcement learning, without explicitly building generative or predictive models of the state or
observation? The key idea in our EX2 algorithm is to estimate novelty by considering how easy it is
for a discriminatively trained classifier to distinguish a given state from other states seen previously.
The intuition is that, if a state is easy to distinguish from other states, it is likely to be novel. To
this end, we propose to train exemplar models for each state that distinguish that state from all other
observed states. We present two key technical contributions that make this into a practical exploration
method. First, we describe how discriminatively trained exemplar models can be used for implicit
density estimation, allowing us to unify this intuition with the theoretically rigorous framework of
count-based exploration. Our experiments illustrate that, in simple domains, the implicitly estimated
densities provide good estimates of the underlying state densities without any explicit generative
training. Second, we show how to amortize the training of exemplar models to prevent the total
number of classifiers from growing with the number of states, making the approach practical and
scalable. Since our method does not require any explicit generative modeling, we can use it on a
range of complex image-based tasks, including Atari games and the vizDoom benchmark, which
has complex 3D visuals and extensive camera motion due to the egocentric viewpoint. Our results
show that EX2 matches the performance of generative novelty-based exploration methods on simpler
tasks, such as continuous control benchmarks and Atari, and greatly exceeds their performance on the
complex vizDoom domain, indicating the value of implicit density estimation over explicit generative
modeling for intrinsic motivation.
2 Related Work
In finite MDPs, exploration algorithms such as E 3 (Kearns & Singh, 2002) and R-max (Brafman &
Tennenholtz, 2002) offer theoretical optimality guarantees. However, these methods typically require
maintaining state-action visitation counts, which can make extending them to high dimensional and/or
continuous states very challenging. Exploring in such state spaces has typically involved strategies
such as introducing distance metrics over the state space (Pazis & Parr, 2013; Kakade et al., 2003),
and approximating the quantities used in classical exploration methods. Prior works have employed
approximations for the state-visitation count (Tang et al., 2017; Bellemare et al., 2016; Abel et al.,
2016), information gain, or prediction error based on a learned dynamics model (Houthooft et al.,
2016; Stadie et al., 2015; Achiam & Sastry, 2017). Bellemare et al. (2016) show that count-based
methods in some sense bound the bonuses produced by exploration incentives based on intrinsic
motivation, such as model uncertainty or information gain, making count-based or density-based
bonuses an appealing and simple option.
Other methods avoid tackling the exploration problem directly and use randomness over model
parameters to encourage novel behavior (Chapelle & Li, 2011). For example, bootstrapped DQN
(Osband et al., 2016) avoids the need to construct a generative model of the state by instead training
multiple, randomized value functions and performs exploration by sampling a value function, and
executing the greedy policy with respect to the value function. While such methods scale to complex
state spaces as well as standard deep RL algorithms, they do not provide explicit novelty-seeking
behavior, but rather a more structured random exploration behavior.
Another direction explored in prior work is to examine exploration in the context of hierarchical
models. An agent that can take temporally extended actions represented as action primitives or skills
can more easily explore the environment (Stolle & Precup, 2002). Hierarchical reinforcement learning
has traditionally tried to exploit temporal abstraction (Barto & Mahadevan, 2003) and relied on semiMarkov decision processes. A few recent works in deep RL have used hierarchies to explore in sparse
reward environments (Florensa et al., 2017; Heess et al., 2016). However, learning a hierarchy is
difficult and has generally required curriculum learning or manually designed subgoals (Kulkarni
et al., 2016). In this work, we discuss a general exploration strategy that is independent of the design
of the policy and applicable to any architecture, though our experiments focus specifically on deep
reinforcement learning scenarios, including image-based navigation, where the state representation is
not conducive to simple count-based metrics or generative models.
2
Concurrently with this work, Pathak et al. (2017) proposed to use discriminatively trained exploration
bonuses by learning state features which are trained to predict the action from state transition pairs.
Then given a state and action, their model predicts the features of the next state and the bonus is
calculated from the prediction error. In contrast to our method, this concurrent work does not attempt
to provide a probabilistic model of novelty and does not perform any sort of implicit density estimation.
Since their method learns an inverse dynamics model, it does not provide for any mechanism to
handle novel events that do not correlate with the agent's actions, though it does succeed in avoiding
the need for generative modeling.
3 Preliminaries
In this paper, we consider a Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, R, \gamma, \rho_0)$. $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively. The transition distribution $\mathcal{T}(s'|a,s)$, initial state distribution $\rho_0(s)$, and reward function $R(s,a)$ are unknown in the reinforcement learning (RL) setting and can only be queried through interaction with the MDP. The goal of reinforcement learning is to find the optimal policy $\pi^*$ that maximizes the expected sum of discounted rewards,
$$\pi^* = \arg\max_\pi \mathbb{E}_{\tau \sim \pi}\left[\textstyle\sum_{t=0}^{T} \gamma^t R(s_t, a_t)\right],$$
where $\tau$ denotes a trajectory $(s_0, a_0, \ldots, s_T, a_T)$ and $\pi(\tau) = \rho_0(s_0) \prod_{t=0}^{T} \pi(a_t|s_t)\, \mathcal{T}(s_{t+1}|s_t, a_t)$. Our experiments evaluate episodic tasks with a policy gradient RL algorithm, though extensions to infinite horizon settings or other algorithms, such as Q-learning and actor-critic, are straightforward.
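As a small, self-contained illustration of the return being maximized above, the following Python sketch computes the discounted sum of rewards along one sampled trajectory; the discount factor and the example reward list are illustrative values, not taken from the paper.

def discounted_return(rewards, gamma=0.99):
    # Computes sum_t gamma^t * R(s_t, a_t) for one trajectory tau.
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Example: discounted_return([0.0, 0.0, 1.0]) == 0.99 ** 2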
Count-based exploration algorithms maintain a state-action visitation count N (s, a), and encourage
the agent to visit rarely seen states, operating on the principle of optimism under uncertainty. This is
typically achieved by adding a reward bonus for visiting rare states. For example, MBIE-EB (Strehl & Littman, 2009) uses a bonus of $\beta/\sqrt{N(s, a)}$, where $\beta$ is a constant, and BEB (Kolter & Ng, 2009) uses a bonus of $\beta/(N(s, a) + |S|)$. In the finite state and action spaces, these methods are PAC-MDP (for
MBIE-EB) or PAC-BAMDP (for BEB), roughly meaning that the agent acts suboptimally for only a
polynomial number of steps. In domains where explicit counting is impractical, pseudo-counts can
be used based on a density estimate p(s, a), which typically is done using some sort of generatively
trained density estimation model (Bellemare et al., 2016). We will describe how we can estimate
densities using only discriminatively trained classifiers, followed by a discussion of how this implicit
estimator can be incorporated into a pseudo-count novelty bonus method.
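Before turning to the implicit estimator, the following Python sketch shows the count-based bonus itself in its simplest form, assuming discrete, hashable state-action pairs; the constant beta and the class interface are illustrative assumptions, not code from the works cited above.

from collections import defaultdict
import math

class CountBonus:
    # MBIE-EB style bonus: beta / sqrt(N(s, a)) over exact visitation counts.
    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s, a)

    def update(self, state, action):
        self.counts[(state, action)] += 1

    def bonus(self, state, action):
        n = max(self.counts[(state, action)], 1)  # avoid division by zero
        return self.beta / math.sqrt(n)

In high-dimensional or continuous spaces these exact counts are almost always zero or one, which is precisely the failure mode that pseudo-counts and the implicit density estimates below are meant to address.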
4 Exemplar Models and Density Estimation
We begin by describing our discriminative model used to predict novelty of states visited during
training. We highlight a connection between this particular form of discriminative model and density
estimation, and in Section 5 describe how to use this model to generate reward bonuses.
4.1 Exemplar Models
To avoid the need for explicit generative models, our novelty estimation method uses exemplar
models. Given a dataset $X = \{x_1, \ldots, x_n\}$, an exemplar model consists of a set of n classifiers or discriminators $\{D_{x_1}, \ldots, D_{x_n}\}$, one for each data point. Each individual discriminator $D_{x_i}$ is trained to distinguish a single positive data point $x_i$, the "exemplar," from the other points in the dataset X. We borrow the term "exemplar model" from Malisiewicz et al. (2011), which coined the term "exemplar SVM" to refer to a particular linear model trained to classify each instance against all others.
However, to our knowledge, our work is the first to apply this idea to exploration for reinforcement
learning. In practice, we avoid the need to train n distinct classifiers by amortizing through a single
exemplar-conditioned network, as discussed in Section 6.
Let $P_X(x)$ denote the data distribution over $\mathcal{X}$, and let $D_{x^*}(x) : \mathcal{X} \to [0, 1]$ denote the discriminator associated with exemplar $x^*$. In order to obtain correct density estimates, as discussed in the next section, we present each discriminator with a balanced dataset, where half of the data consists of the exemplar $x^*$ and half comes from the background distribution $P_X(x)$. Each discriminator is then trained to model a Bernoulli distribution $D_{x^*}(x) = P(x = x^*|x)$ via maximum likelihood. Note that the label $x = x^*$ is noisy because data that is extremely similar or identical to $x^*$ may also occur in the background distribution $P_X(x)$, so the classifier does not always output 1. To obtain the
maximum likelihood solution, the discriminator is trained to optimize the following cross-entropy objective:
$$D_{x^*} = \arg\max_{D \in \mathcal{D}} \left( \mathbb{E}_{\delta_{x^*}}[\log D(x)] + \mathbb{E}_{P_X}[\log(1 - D(x))] \right). \quad (1)$$
We discuss practical amortized methods that avoid the need to train n discriminators in Section 6, but
to keep the derivation in this section simple, we consider independent discriminators for now.
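The following numpy sketch trains one such discriminator by stochastic gradient ascent on the objective in Eq. (1), using a linear-logistic model for clarity; the model class, step size, and batch size are illustrative assumptions (the paper itself uses neural networks and amortizes across exemplars, as described in Section 6).

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_exemplar_discriminator(x_star, background, steps=500, lr=0.1, batch=32):
    # x_star: (d,) exemplar; background: (m, d) samples from P_X.
    rng = np.random.default_rng(0)
    w, b = np.zeros(x_star.shape[0]), 0.0
    for _ in range(steps):
        neg = background[rng.integers(len(background), size=batch)]
        pos = np.tile(x_star, (batch, 1))          # balanced positive half
        X = np.vstack([pos, neg])
        y = np.concatenate([np.ones(batch), np.zeros(batch)])
        p = sigmoid(X @ w + b)
        w += lr * X.T @ (y - p) / len(y)           # ascent on the log-likelihood
        b += lr * float(np.mean(y - p))
    return lambda x: float(sigmoid(x @ w + b))     # the trained D_{x*}(.)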
4.2 Exemplar Models as Implicit Density Estimation
To show how the exemplar model can be used for implicit density estimation, we begin by considering
an infinitely powerful, optimal discriminator, for which we can make an explicit connection between
the discriminator and the underlying data distribution PX (x):
Proposition 1. (Optimal Discriminator) For a discrete distribution $P_X(x)$, the optimal discriminator $D_{x^*}$ for exemplar $x^*$ satisfies
$$D_{x^*}(x) = \frac{\delta_{x^*}(x)}{\delta_{x^*}(x) + P_X(x)} \quad \text{and} \quad D_{x^*}(x^*) = \frac{1}{1 + P_X(x^*)}.$$
Proof. The proof is obtained by taking the derivative of the loss in Eq. (1) with respect to D(x),
setting it to zero, and solving for D(x).
It follows that, if the discriminator is optimal, we can recover the probability of a data point $P_X(x^*)$ by evaluating the discriminator at its own exemplar $x^*$, according to
$$P_X(x^*) = \frac{1 - D_{x^*}(x^*)}{D_{x^*}(x^*)}. \quad (2)$$
For continuous domains, $\delta_{x^*}(x^*) \to \infty$, so $D(x) \to 1$. This means we are unable to recover $P_X(x)$ via Eq. (2). However, we can smooth the delta by adding noise $\epsilon \sim q(\epsilon)$ to the exemplar $x^*$ during training, which allows us to recover exact density estimates by solving for $P_X(x)$. For example, if we let $q = \mathcal{N}(0, \sigma^2 I)$, then the optimal discriminator evaluated at $x^*$ satisfies
$$D_{x^*}(x^*) = \left[ (2\pi\sigma^2)^{-d/2} \right] \Big/ \left[ (2\pi\sigma^2)^{-d/2} + P_X(x) \right].$$
Even if we do not know the noise variance, we have
$$P_X(x^*) \propto \frac{1 - D_{x^*}(x^*)}{D_{x^*}(x^*)}. \quad (3)$$
This proportionality holds for any noise q as long as $(\delta_{x^*} * q)(x^*)$ (where $*$ denotes convolution) is the same for every $x^*$. The reward bonus we describe in Section 5 is invariant to the normalization factor, so proportional estimates are sufficient.
In practice, we can get density estimates that are better suited for exploration by introducing smoothing, which involves adding noise to the background distribution $P_X$, to produce the estimator
$$D_{x^*}(x) = \frac{(\delta_{x^*} * q)(x)}{(\delta_{x^*} * q)(x) + (P_X * q)(x^*)}.$$
We then recover our density estimate as $(P_X * q)(x^*)$. In the case when $P_X$ is a collection of delta functions around data points, this is equivalent to kernel density estimation using the noise distribution as a kernel. With Gaussian noise $q = \mathcal{N}(0, \sigma^2 I)$, this is equivalent to using an RBF kernel.
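Given any trained discriminator, recovering the (unnormalized) density estimate of Eq. (2)/(3) is a one-line computation, sketched below in Python; the clamping constants are an illustrative numerical safeguard, not part of the derivation.

def implicit_density(x_star, discriminator):
    # Eq. (3): P_X(x*) is proportional to (1 - D_{x*}(x*)) / D_{x*}(x*).
    d = min(max(discriminator(x_star), 1e-6), 1.0 - 1e-6)
    return (1.0 - d) / d

For instance, composed with the training sketch from Section 4.1, implicit_density(x_star, train_exemplar_discriminator(x_star, background)) yields the smoothed estimate discussed above.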
4.3 Latent Space Smoothing with Noisy Discriminators
In the previous section, we discussed how adding noise can provide for smoothed density estimates,
which is especially important in complex or continuous spaces, where all states might be distinguishable with a powerful enough discriminator. Unfortunately, for high-dimensional states, such as
images, adding noise directly to the state often does not produce meaningful new states, since the
distribution of states lies on a thin manifold, and any added noise will lift the noisy state off of this
manifold. In this section, we discuss how we can learn a smoothing distribution by injecting the noise
into a learned latent space, rather than adding it to the original states.
Formally, we introduce a latent variable z. We wish to train an encoder distribution q(z|x), and a latent space classifier $p(y|z) = D(z)^y (1 - D(z))^{1-y}$, where y = 1 when $x = x^*$ and y = 0 when $x \neq x^*$. We additionally regularize the noise distribution against a prior distribution p(z), which in our case is a unit Gaussian. Letting $\tilde{p}(x) = \frac{1}{2}\delta_{x^*}(x) + \frac{1}{2}p_X(x)$ denote the balanced training distribution from before, we can learn the latent space by maximizing the objective
$$\max_{p_{y|z},\, q_{z|x}} \; \mathbb{E}_{\tilde{p}}\left[ \mathbb{E}_{q_{z|x}}[\log p(y|z)] - D_{\mathrm{KL}}(q(z|x) \,\|\, p(z)) \right]. \quad (4)$$
Intuitively, this objective optimizes the noise distribution so as to maximize classification accuracy
while transmitting as little information through the latent space as possible. This causes z to only
capture the factors of variation in x that are most informative for distinguishing points from the exemplar,
resulting in noise that stays on the state manifold. For example, in the Atari domain, latent space
noise might correspond to smoothing over the location of the player and moving objects on the screen,
in contrast to performing pixel-wise Gaussian smoothing.
Letting $q(z|y=1) = \int_x \delta_{x^*}(x)\, q(z|x)\, dx$ and $q(z|y=0) = \int_x p_X(x)\, q(z|x)\, dx$ denote the marginalized positive and negative densities over the latent space, we can characterize the optimal discriminator
and encoder distributions as follows. For any encoder q(z|x), the optimal discriminator D(z) satisfies:
$$p(y = 1|z) = D(z) = \frac{q(z|y=1)}{q(z|y=1) + q(z|y=0)},$$
and for any discriminator D(z), the optimal encoder distribution satisfies:
$$q(z|x) \propto D(z)^{y_{\mathrm{soft}}(x)} \left(1 - D(z)\right)^{1 - y_{\mathrm{soft}}(x)} p(z),$$
where $y_{\mathrm{soft}}(x) = p(y = 1|x) = \frac{\delta_{x^*}(x)}{\delta_{x^*}(x) + p_X(x)}$ is the average label of x. These can be obtained by
differentiating the objective, and the full derivation is included in Appendix A.1. Intuitively, q(z|x)
is equal to the prior p(z) by default, which carries no information about x. It then scales up the
probability on latent codes z where the discriminator is confident and correct. To recover a density
estimate, we estimate $D(x) = \mathbb{E}_q[D(z)]$ and apply Eq. (3) to obtain the density.
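The following numpy sketch evaluates a single-sample Monte Carlo estimate of objective (4) for one input x with label y, assuming a diagonal Gaussian encoder and a linear-logistic latent classifier; both assumptions, and the closed-form KL term they permit, are illustrative choices rather than the exact architecture used in the paper.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def objective_estimate(x, y, encode, w, b):
    # encode(x) -> (mu, std): parameters of the diagonal Gaussian q(z|x).
    mu, std = encode(x)
    z = mu + std * np.random.default_rng().standard_normal(mu.shape)  # reparameterization
    d = np.clip(sigmoid(w @ z + b), 1e-6, 1 - 1e-6)                   # latent classifier D(z)
    log_p_y = y * np.log(d) + (1 - y) * np.log(1 - d)                 # log p(y|z)
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    kl = 0.5 * np.sum(mu ** 2 + std ** 2 - 2.0 * np.log(std) - 1.0)
    return log_p_y - kl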
4.4 Smoothing from Suboptimal Discriminators
In our previous derivations, we assume an optimal, infinitely powerful discriminator which can
emit a different value D(x) for every input x. However, this is typically not possible except for
small, countable domains. A secondary but important source of density smoothing occurs when the
discriminator has difficulty distinguishing two states x and x'. In this case, the discriminator will
average over the outputs of the infinitely powerful discriminator. This form of smoothing comes from
the inductive bias of the discriminator, which is difficult to quantify. In practice, we typically found
this effect to be beneficial for our model rather than harmful. An example of such smoothed density
estimates is shown in Figure 2. Due to this effect, adding noise is not strictly necessary to benefit
from smoothing, though it provides for significantly better control over the degree of smoothing.
5 EX2: Exploration with Exemplar Models
We can now describe our exploration algorithm based on implicit density models. Pseudocode for a
batch policy search variant using the single exemplar model is shown in Algorithm 1. Online variants
for other RL algorithms, such as Q-learning, are also possible. In order to apply the ideas from
count-based exploration described in Section 3, we must approximate the state visitation counts
N (s) = nP (s), where P (s) is the distribution over states visited during training. Note that we can
easily use state-action counts N (s, a), but we omit the action for simplicity of notation. To generate
approximate samples from P (s), we use a replay buffer B, which is a first-in first-out (FIFO) queue
that holds previously visited states. Our exemplars are the states we wish to score, which are the states
in the current batch of trajectories. In an online algorithm, we would instead train a discriminator
after receiving every new observation one at a time, and compute the bonus in the same manner.
Given the output from discriminators trained to optimize Eq. (1), we augment the reward with a function of the "novelty" of the state (where $\beta$ is a hyperparameter that can be tuned to the magnitude of the task reward): $R'(s, a) = R(s, a) + \beta f(D_s(s))$.
Algorithm 1 EX2 for batch policy optimization
1: Initialize replay buffer B
2: for iteration i in {1, . . . , N} do
3:    Sample trajectories {τ_j} from policy π_i
4:    for state s in {τ} do
5:       Sample a batch of negatives {s'_k} from B.
6:       Train discriminator D_s to optimize Eq. (1) with positive s, and negatives {s'_k}.
7:       Compute reward R'(s, a) = R(s, a) + β f(D_s(s))
8:    end for
9:    Improve π_i with respect to R'(s, a) using any policy optimization method.
10:   B ← B ∪ {τ_i}
11: end for
In our experiments, we use the heuristic bonus $-\log p(s)$, due to the fact that normalization constants become absorbed by baselines used in typical RL algorithms. For discrete domains, we can also use a count-based bonus $1/\sqrt{N(s)}$ (Tang et al., 2017), where $N(s) = nP(s)$, with n being the size of the replay buffer B. A summary of EX2 for a generic batch reinforcement learner is shown in Algorithm 1.
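A compact Python rendering of this loop is sketched below. The callables sample_trajectories, train_discriminator, and policy_update, as well as the .states and .rewards attributes of a trajectory, are hypothetical interfaces standing in for whatever batch RL stack is used (e.g. TRPO in the experiments of Section 7); only the bonus computation follows the -log p(s) heuristic and Eq. (3) directly.

from collections import deque
import numpy as np

def ex2_train(policy, env, sample_trajectories, train_discriminator,
              policy_update, n_iters=100, beta=1.0, buffer_size=100000):
    replay = deque(maxlen=buffer_size)                  # FIFO replay buffer B
    for _ in range(n_iters):
        trajectories = sample_trajectories(policy, env)
        for tau in trajectories:
            for t, s in enumerate(tau.states):
                negatives = list(replay) or [s]         # cold-start fallback
                D_s = train_discriminator(positive=s, negatives=negatives)
                d = float(np.clip(D_s(s), 1e-6, 1 - 1e-6))
                log_p = np.log(1.0 - d) - np.log(d)     # log of the Eq. (3) estimate
                tau.rewards[t] += beta * (-log_p)       # heuristic bonus -log p(s)
        policy_update(policy, trajectories)             # any batch policy optimizer
        for tau in trajectories:
            replay.extend(tau.states)
    return policy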
6 Model Architecture
To process complex observations such as images, we implement our exemplar model using neural
networks, with convolutional models used for image-based domains. To reduce the computational
cost of training such large per-exemplar classifiers, we explore two methods for amortizing the
computation across multiple exemplars.
6.1 Amortized Multi-Exemplar Model
Instead of training a separate classifier for each exemplar, we can instead train a single model that is
conditioned on the exemplar $x^*$. When using the latent space formulation, we condition the latent space discriminator p(y|z) on an encoded version of $x^*$ given by $q(z^*|x^*)$, resulting in a classifier of the form $p(y|z, z^*) = D(z, z^*)^y (1 - D(z, z^*))^{1-y}$. The advantage of this amortized model is
that it does not require us to train new discriminators from scratch at each iteration, and provides
some degree of generalization for density estimation at new states. A diagram of this architecture is
shown in Figure 1. The amortized architecture has the appearance of a comparison operator: it is
trained to output 0 when $x^* \neq x$, and the optimal discriminator values covered in Section 4 when $x^* = x$, subject to the smoothing imposed by the latent space noise.
6.2 K-Exemplar Model
As long as the distribution of positive examples is known, we can recover density estimates via Eq. (3).
Thus, we can also consider a batch of exemplars x1 , ..., xK , and sample from this batch uniformly
during training. We refer to this model as the "K-Exemplar" model, which allows us to interpolate
smoothly between a more powerful model with one discriminator per state (K = 1) and a weaker model that uses a single discriminator for all states (K = # states). A more detailed discussion of
this method is included in Appendix A.2. In our experiments, we batch adjacent states in a trajectory
into the same discriminator which corresponds to a form of temporal regularization that assumes that
adjacent states in time are similar. We also share the majority of layers between discriminators in the
neural networks similar to (Osband et al., 2016), and only allow the final linear layer to vary amongst
discriminators, which forces the shared layers to learn a joint feature representation, similarly to the
amortized model. An example architecture is shown in Figure 1.
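As a concrete sketch of how such training batches can be assembled, the function below draws K temporally adjacent states from one trajectory to serve as the shared exemplars of a single discriminator head, and balances them against negatives from the replay buffer; the values of K, the batch size, and the even positive/negative split are illustrative assumptions.

import numpy as np

def k_exemplar_batch(traj_states, replay_states, K=8, batch_size=64):
    rng = np.random.default_rng()
    traj_states = np.asarray(traj_states)
    replay_states = np.asarray(replay_states)
    start = rng.integers(0, max(1, len(traj_states) - K + 1))
    exemplars = traj_states[start:start + K]       # adjacent states, one head
    pos = exemplars[rng.integers(len(exemplars), size=batch_size // 2)]
    neg = replay_states[rng.integers(len(replay_states), size=batch_size // 2)]
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return X, y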
6.3 Relationship to Generative Adversarial Networks (GANs)
Our exploration algorithm has an interesting interpretation related to GANs (Goodfellow et al.,
2014). The policy can be viewed as the generator of a GAN, and the exemplar model serves as the
discriminator, which is trying to classify states from the current batch of trajectories against previous
a) Amortized Architecture
b) K-Exemplar Architecture
Figure 1: A diagram of our a) amortized model architecture and b) the K-exemplar model architecture.
Noise is injected after the encoder module (a) or after the shared layers (b). Although possible, we do
not tie the encoders of (a) in our experiments.
states. Using the K-exemplar version of our algorithm, we can train a single discriminator for all
states in the current batch (rather than one for each state), which mirrors the GAN setup.
In GANs, the generator plays an adversarial game with the discriminator by attempting to produce indistinguishable samples in order to fool the discriminator. However, in our algorithm, the generator is rewarded for helping the discriminator rather than fooling it, so our algorithm plays a cooperative game rather than an adversarial one. Instead, the two compete with the progression of time: as a novel state becomes visited frequently, the replay buffer will become saturated with that state and it
will lose its novelty. This property is desirable in that it forces the policy to continually seek new
states from which to receive exploration bonuses.
7 Experimental Evaluation
The goal of our experimental evaluation is to compare the EX2 method to both a naïve exploration
strategy and to recently proposed exploration schemes for deep reinforcement learning based on
explicit density modeling. We present results on both low-dimensional benchmark tasks used in
prior work, and on more complex vision-based tasks, where prior density-based exploration bonus
methods are difficult to apply. We use TRPO (Schulman et al., 2015) for policy optimization, because
it operates on both continuous and discrete action spaces, and due to its relative robustness to hyperparameter choices (Duan et al., 2016). Our code and additional supplementary material including
videos will be available at https://sites.google.com/view/ex2exploration.
Experimental Tasks Our experiments include three low-dimensional tasks intended to assess
whether EX2 can successfully perform implicit density estimation and compute exploration bonuses,
and four high-dimensional image-based tasks of varying difficulty intended to evaluate whether
implicit density estimation provides improvement in domains where generative modeling is difficult.
The first low-dimensional task is a continuous 2D maze with a sparse reward function that only
provides a reward when the agent is within a small radius of the goal. Because this task is 2D, we can
use it to directly visualize the state visitation densities and compare to an upper bound histogram
method for density estimation. The other two low-dimensional tasks are benchmark tasks from
the OpenAI gym benchmark suite, SparseHalfCheetah and SwimmerGather, which provide for a
comparison against prior work on generative exploration bonuses in the presence of sparse rewards.
For the vision-based tasks, we include three Atari games, as well as a much more difficult ego-centric
navigation task based on vizDoom (DoomMyWayHome+). The Atari games are included for easy
comparison with prior methods based on generative models, but do not provide especially challenging
visual observations, since the clean 2D visuals and relatively low visual diversity of these tasks makes
generative modeling easy. In fact, prior work on video prediction for Atari games easily achieves
accurate predictions hundreds of frames into the future (Oh et al., 2015), while video prediction
on natural images is challenging even a couple of frames into the future (Mathieu et al., 2015).
The vizDoom maze navigation task is intended to provide a comparison against prior methods with
substantially more challenging observations: the game features a first-person viewpoint, 3D visuals,
and partial observability, as well as the usual challenges associated with sparse rewards. We make
the task particularly difficult by initializing the agent in the furthest room from the goal location,
Figure 2: a, b) Illustration of estimated densities on the 2D maze task produced by our model (a, "Exemplar"), compared to the empirical discretized distribution (b, "Empirical"). Our method provides reasonable, somewhat smoothed density estimates. c) Density estimates produced with our implicit density estimator on a toy dataset (top left), with increasing amounts of noise regularization ("Varying Smoothing").
Figure 3: Example task images. From top to bottom, left to right: Doom, map of the MyWayHome task (goal is green, start is blue), Venture, HalfCheetah.
requiring it to navigate through 8 rooms before reaching the goal. Sample images taken from several
of these tasks are shown in Figure 3 and detailed task descriptions are given in Appendix A.3.
We compare the two variants of our method (K-exemplar and amortized) to standard random exploration, kernel density estimation (KDE) with RBF kernels, a method based on Bayesian neural
network generative models called VIME (Houthooft et al., 2016), and exploration bonuses based on
hashing of latent spaces learned via an autoencoder (Tang et al., 2017).
2D Maze On the 2D maze task, we can visually compare the estimated state density from our
exemplar model and the empirical state-visitation distribution sampled from the replay buffer, as
shown in Figure 2. Our model generates sensible density estimates that smooth out the true empirical
distribution. For exploration performance, shown in Table 1, TRPO with Gaussian exploration cannot
find the sparse reward goal, while both variants of our method perform similarly to VIME and KDE.
Since the dimensionality of the task is low, we also use a histogram-based method to estimate the
density, which provides an upper bound on the performance of count-based exploration on this task.
Continuous Control: SwimmerGather and SparseHalfCheetah SwimmerGather and SparseHalfCheetah are two challenging continuous control tasks proposed by Houthooft et al. (2016). Both
environments feature sparse reward and medium-dimensional observations (33 and 20 dimensions
respectively). SwimmerGather is a hierarchical task in which no previous algorithms using na?ve
exploration have made any progress. Our results demonstrate that, even on medium-dimensional
tasks where explicit generative models should perform well, our implicit density estimation approach
achieves competitive results. EX2, VIME, and Hashing significantly outperform the naïve TRPO
algorithm and KDE on SwimmerGather, and amortized EX2 outperforms all other methods on SparseHalfCheetah by a significant margin. This indicates that the implicit density estimates obtained by
our method provide for exploration bonuses that are competitive with a variety of explicit density
estimation techniques.
Image-Based Control: Atari and Doom In our final set of experiments, we test the ability of
our algorithm to scale to rich sensory inputs and high dimensional image-based state spaces. We
chose several Atari games that have sparse rewards and present an exploration challenge, as well as a
maze navigation benchmark based on vizDoom. Each domain presents a unique set of challenges.
The vizDoom domain contains the most realistic images, and the environment is viewed from an
egocentric perspective which makes building dynamics models difficult and increases the importance
of intelligent smoothing and generalization. The Atari games (Freeway, Frostbite, Venture) contain
simpler images from a third-person viewpoint, but often contain many moving, distractor objects
that a density model must generalize to. Freeway and Venture contain sparse reward, and Frostbite
contains a small amount of dense reward but attaining higher scores typically requires exploration.
Our results demonstrate that EX2 is able to generate coherent exploration behavior even in high-dimensional visual environments, matching the best-performing prior methods on the Atari games.
On the most challenging task, DoomMyWayHome+, our method greatly exceeds all of the prior
Task              | K-Ex.(ours) | Amor.(ours) | VIME^1 | TRPO^2 | Hashing^3 | KDE    | Histogram
2D Maze           | -104.2      | -132.2      | -135.5 | -175.6 | -         | -117.5 | -69.6
SparseHalfCheetah | 3.56        | 173.2       | 98.0   | 0      | 0.5       | 0      | -
SwimmerGather     | 0.228       | 0.240       | 0.196  | 0      | 0.258     | 0.098  | -
Freeway (Atari)   | -           | 33.3        | -      | 16.5   | 33.5      | -      | -
Frostbite (Atari) | -           | 4901        | -      | 2869   | 5214      | -      | -
Venture (Atari)   | -           | 900         | -      | 121    | 445       | -      | -
DoomMyWayHome     | 0.740       | 0.788       | 0.443  | 0.250  | 0.331     | 0.195  | -
^1 Houthooft et al. (2016)   ^2 Schulman et al. (2015)   ^3 Tang et al. (2017)
Table 1: Mean scores (higher is better) of our algorithm (both K-exemplar and amortized) versus
VIME (Houthooft et al., 2016), baseline TRPO, Hashing, and kernel density estimation (KDE). Our
approach generally matches the performance of previous explicit density estimation methods, and
greatly exceeds their performance on the challenging DoomMyWayHome+ task, which features
camera motion, partial observability, and extremely sparse rewards. We did not run VIME or K-Exemplar on Atari games due to computational cost. Atari games are trained for 50M time steps. Learning curves are included in Appendix A.5.
exploration techniques, and is able to guide the agent through multiple rooms to the goal. This result
indicates the benefit of implicit density estimation: while explicit density estimators can achieve good
results on simple, clean images in the Atari games, they begin to struggle with the more complex
egocentric observations in vizDoom, while our EX2 is able to provide reasonable density estimates
and achieves good results.
8 Conclusion and Future Work
We presented EX2, a scalable exploration strategy based on training discriminative exemplar models
to assign novelty bonuses. We also demonstrate a novel connection between exemplar models and
density estimation, which motivates our algorithm as approximating pseudo-count exploration. This
density estimation technique also does not require reconstructing samples to train, unlike most
methods for training generative or energy-based models. Our empirical results show that EX2 tends
to achieve comparable results to the previous state-of-the-art for continuous control tasks on lowdimensional environments, and can scale gracefully to handle rich sensory inputs such as images.
Since our method avoids the need for generative modeling of complex image-based observations, it
exceeds the performance of prior generative methods on domains with more complex observation
functions, such as the egocentric Doom navigation task.
To understand the tradeoffs between discriminatively trained exemplar models and generative modeling, it helps to consider the behavior of the two methods when overfitting or underfitting. Both
methods will assign flat bonuses when underfitting and high bonuses to all new states when overfitting.
However, in the case of exemplar models, overfitting is easy with high dimensional observations,
especially in the amortized model where the network simply acts as a comparator. Underfitting is
also easy to achieve, simply by increasing the magnitude of the noise injected into the latent space.
Therefore, although both approaches can suffer from overfitting and underfitting, the exemplar method
provides a single hyperparameter that interpolates between these extremes without changing the
model. An exciting avenue for future work would be to adjust this smoothing factor automatically,
based on the amount of available data. More generally, implicit density estimation with exemplar
models is likely to be of use in other density estimation applications, and exploring such applications
would be another exciting direction for future work.
Acknowledgement We would like to thank Adam Stooke, Sandy Huang, and Haoran Tang for
providing efficient and parallelizable policy search code. We thank Joshua Achiam for help with
setting up benchmark tasks. This research was supported by NSF IIS-1614653, NSF IIS-1700696, an
ONR Young Investigator Program award, and Berkeley DeepDrive.
References
Abel, David, Agarwal, Alekh, Diaz, Fernando, Krishnamurthy, Akshay, and Schapire, Robert E.
Exploratory gradient boosting for reinforcement learning in complex domains. In Advances in
Neural Information Processing Systems (NIPS), 2016.
Achiam, Joshua and Sastry, Shankar. Surprise-based intrinsic motivation for deep reinforcement
learning. CoRR, abs/1703.01732, 2017.
Barto, Andrew G. and Mahadevan, Sridhar. Recent advances in hierarchical reinforcement learning.
Discrete Event Dynamic Systems, 13(1-2), 2003.
Bellemare, Marc G., Srinivasan, Sriram, Ostrovski, Georg, Schaul, Tom, Saxton, David, and Munos,
Remi. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems (NIPS), 2016.
Brafman, Ronen I. and Tennenholtz, Moshe. R-max - a general polynomial time algorithm for
near-optimal reinforcement learning. Journal of Machine Learning Research (JMLR), 2002.
Bubeck, Sébastien and Cesa-Bianchi, Nicolò. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5, 2012.
Chapelle, O. and Li, Lihong. An empirical evaluation of thompson sampling. In Advances in Neural
Information Processing Systems (NIPS), 2011.
Chentanez, Nuttapong, Barto, Andrew G, and Singh, Satinder P. Intrinsically Motivated Reinforcement Learning. In Advances in Neural Information Processing Systems (NIPS). MIT Press,
2005.
Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep
reinforcement learning for continuous control. In International Conference on Machine Learning
(ICML), 2016.
Florensa, Carlos Campo, Duan, Yan, and Abbeel, Pieter. Stochastic neural networks for hierarchical
reinforcement learning. In International Conference on Learning Representations (ICLR), 2017.
Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil,
Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural
Information Processing Systems (NIPS). 2014.
Heess, Nicolas, Wayne, Gregory, Tassa, Yuval, Lillicrap, Timothy P., Riedmiller, Martin A., and
Silver, David. Learning and transfer of modulated locomotor controllers. CoRR, abs/1610.05182,
2016.
Houthooft, Rein, Chen, Xi, Duan, Yan, Schulman, John, Turck, Filip De, and Abbeel, Pieter. Vime:
Variational information maximizing exploration. In Advances in Neural Information Processing
Systems (NIPS), 2016.
Kakade, Sham, Kearns, Michael, and Langford, John. Exploration in metric state spaces. In
International Conference on Machine Learning (ICML), 2003.
Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time.
Machine Learning, 2002.
Kolter, J. Zico and Ng, Andrew Y. Near-bayesian exploration in polynomial time. In International
Conference on Machine Learning (ICML), 2009.
Kulkarni, Tejas D, Narasimhan, Karthik, Saeedi, Ardavan, and Tenenbaum, Josh. Hierarchical deep
reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in
Neural Information Processing Systems (NIPS). 2016.
Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval,
Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. In
International Conference on Learning Representations (ICLR), 2015.
Malisiewicz, Tomasz, Gupta, Abhinav, and Efros, Alexei A. Ensemble of exemplar-svms for object
detection and beyond. In International Conference on Computer Vision (ICCV), 2011.
Mathieu, Michaël, Couprie, Camille, and LeCun, Yann. Deep multi-scale video prediction beyond
mean square error. CoRR, abs/1511.05440, 2015. URL http://arxiv.org/abs/1511.05440.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare,
Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig,
Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra,
Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement
learning. Nature, 518(7540):529-533, 02 2015.
Oh, Junhyuk, Guo, Xiaoxiao, Lee, Honglak, Lewis, Richard, and Singh, Satinder. Action-conditional
video prediction using deep networks in atari games. In Advances in Neural Information Processing
Systems (NIPS), 2015.
Osband, Ian, Blundell, Charles, Pritzel, Alexander, and Van Roy, Benjamin. Deep exploration via
bootstrapped DQN. In Advances in Neural Information Processing Systems (NIPS), 2016.
Pathak, Deepak, Agrawal, Pulkit, Efros, Alexei A., and Darrell, Trevor. Curiosity-driven exploration
by self-supervised prediction. In International Conference on Machine Learning (ICML), 2017.
Pazis, Jason and Parr, Ronald. Pac optimal exploration in continuous space markov decision processes.
In AAAI Conference on Artificial Intelligence (AAAI), 2013.
Salimans, Tim, Goodfellow, Ian J., Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi.
Improved techniques for training gans. In Advances in Neural Information Processing Systems
(NIPS), 2016.
Schmidhuber, Jürgen. A possibility for implementing curiosity and boredom in model-building
neural controllers. In Proceedings of the First International Conference on Simulation of Adaptive
Behavior on From Animals to Animats, Cambridge, MA, USA, 1990. MIT Press. ISBN 0-262-63138-5.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I., and Abbeel, Pieter. Trust region
policy optimization. In International Conference on Machine Learning (ICML), 2015.
Stadie, Bradly C., Levine, Sergey, and Abbeel, Pieter. Incentivizing exploration in reinforcement
learning with deep predictive models. CoRR, abs/1507.00814, 2015.
Stolle, Martin and Precup, Doina. Learning Options in Reinforcement Learning. Springer Berlin
Heidelberg, Berlin, Heidelberg, 2002. ISBN 978-3-540-45622-3. doi: 10.1007/3-540-45622-8_16.
Strehl, Alexander L. and Littman, Michael L. An analysis of model-based interval estimation for
markov decision processes. Journal of Computer and System Sciences, 2009.
Tang, Haoran, Houthooft, Rein, Foote, Davis, Stooke, Adam, Chen, Xi, Duan, Yan, Schulman, John,
Turck, Filip De, and Abbeel, Pieter. #exploration: A study of count-based exploration for deep
reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), 2017.
Multitask Spectral Learning of Weighted Automata
Guillaume Rabusseau*
McGill University
Borja Balle†
Amazon Research Cambridge
Joelle Pineau‡
McGill University
Abstract
We consider the problem of estimating multiple related functions computed by
weighted automata (WFA). We first present a natural notion of relatedness between
WFAs by considering the extent to which several WFAs can share a common underlying representation. We then introduce the novel model of vector-valued WFA which
conveniently helps us formalize this notion of relatedness. Finally, we propose a
spectral learning algorithm for vector-valued WFAs to tackle the multitask learning
problem. By jointly learning multiple tasks in the form of a vector-valued WFA,
our algorithm enforces the discovery of a representation space shared between
tasks. The benefits of the proposed multitask approach are theoretically motivated
and showcased through experiments on both synthetic and real world datasets.
1 Introduction
One common task in machine learning consists in estimating an unknown function $f : \mathcal{X} \to \mathcal{Y}$ from a training sample of input-output data $\{(x_i, y_i)\}_{i=1}^{N}$ where each $y_i \simeq f(x_i)$ is a (possibly noisy) estimate of $f(x_i)$. In multitask learning, the learner is given several such learning tasks $f_1, \ldots, f_m$.
It has been shown, both experimentally and theoretically, that learning related tasks simultaneously
can lead to better performance relative to learning each task independently (see e.g. [1, 7], and
references therein). Multitask learning has proven particularly useful when few data points are
available for each task, or when it is difficult or costly to collect data for a target task while much data
is available for related tasks (see e.g. [28] for an example in healthcare). In this paper, we propose a
multitask learning algorithm for the case where the input space X consists of sequence data.
Many tasks in natural language processing, computational biology, or reinforcement learning, rely on
estimating functions mapping sequences of observations to real numbers: e.g. inferring probability
distributions over sentences in language modeling or learning the dynamics of a model of the
environment in reinforcement learning. In this case, the function f to infer from training data is
defined over the set $\Sigma^*$ of strings built on a finite alphabet $\Sigma$. Weighted finite automata (WFA) are
finite state machines that allow one to succinctly represent such functions. In particular, WFAs
can compute any probability distribution defined by a hidden Markov model (HMM) [13] and can
model the transition and observation behavior of partially observable Markov decision processes [26].
A recent line of work has led to the development of spectral methods for learning HMMs [17],
WFAs [2, 4] and related models, offering an alternative to EM based algorithms with the benefits of
being computationally efficient and providing consistent estimators. Spectral learning algorithms
have led to competitive results in the fields of natural language processing [12, 3] and robotics [8].
We consider the problem of multitask learning for WFAs. As a motivational example, consider
a natural language modeling task where one needs to make predictions in different contexts (e.g.
online chat vs. newspaper articles) and has access to datasets in each of them; it is natural to expect
that basic grammar is shared across the datasets and that one could benefit from simultaneously
* [email protected]
† [email protected]
‡ [email protected]
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
learning these tasks. The notion of relatedness between tasks can be expressed in different ways;
one common assumption in multitask learning is that the multiple tasks share a common underlying
representation [6, 11]. In this paper, we present a natural notion of shared representation between
functions defined over strings and we propose a learning algorithm that encourages the discovery
of this shared representation. Intuitively, our notion of relatedness captures the extent to which several
functions can be computed by WFAs sharing a joint forward feature map. In order to formalize
this notion of relatedness, we introduce the novel model of vector-valued WFA (vv-WFA) which
generalizes WFAs to vector-valued functions and offers a natural framework to formalize the multitask learning problem. Given m tasks $f_1, \ldots, f_m : \Sigma^* \to \mathbb{R}$, we consider the function $\vec{f} = [f_1, \ldots, f_m] : \Sigma^* \to \mathbb{R}^m$ whose output for a given input string x is the m-dimensional vector having entries $f_i(x)$ for $i = 1, \ldots, m$. We show that the notion of minimal vv-WFA computing $\vec{f}$ exactly captures our notion of relatedness between tasks and we prove that the dimension of such a minimal representation is equal to the rank of a flattening of the Hankel tensor of $\vec{f}$ (Theorem 3). Leveraging this result, we design a spectral learning algorithm for vv-WFAs which constitutes a sound multitask learning algorithm for WFAs: by learning $\vec{f}$ in the form of a vv-WFA, rather than independently learning a WFA for each task $f_i$, we implicitly enforce the discovery of a joint feature space shared among all
tasks. After giving a theoretical insight on the benefits of this multitask approach (by leveraging a
recent result on asymmetric bounds for singular subspace estimation [9]), we conclude by showcasing
these benefits with experiments on both synthetic and real world data.
Related work. Multitask learning for sequence data has previously received limited attention. In [16],
mixtures of Markov chains are used to model dynamic user profiles. Tackling the multitask problem
with nonparametric Bayesian methods is investigated in [15] to model related time series with Beta
processes and in [23] to discover relationships between related datasets using nested Dirichlet process
and infinite HMMs. Extending recurrent neural networks to the multitask setting has also recently
received some interest (see e.g. [21, 22]). To the best of our knowledge, this paper constitutes the
first attempt to tackle the multitask problem for the class of functions computed by general WFAs.
2 Preliminaries
We first present notions on weighted automata, spectral learning of weighted automata and tensors.
We start by introducing some notation. We denote by Σ* the set of strings on a finite alphabet
Σ. The empty string is denoted by λ and the length of a string x by |x|. For any integer k we let
[k] = {1, 2, …, k}. We use lower case bold letters for vectors (e.g. v ∈ ℝ^{d_1}), upper case bold
letters for matrices (e.g. M ∈ ℝ^{d_1×d_2}) and bold calligraphic letters for higher order tensors (e.g.
T ∈ ℝ^{d_1×d_2×d_3}). The i-th row (resp. column) of a matrix M will be denoted by M_{i,:} (resp. M_{:,i}).
This notation is extended to slices of a tensor in the straightforward way. Given a matrix M ∈ ℝ^{d_1×d_2},
we denote by M^+ its Moore-Penrose pseudo-inverse and by vec(M) ∈ ℝ^{d_1 d_2} its vectorization.
Weighted finite automaton. A weighted finite automaton (WFA) with n states is a tuple A =
(α, {A^σ}_{σ∈Σ}, ω) where α, ω ∈ ℝ^n are the initial and final weight vectors respectively, and A^σ ∈
ℝ^{n×n} is the transition matrix for each symbol σ ∈ Σ. A WFA computes a function f_A : Σ* → ℝ
defined for each word x = x_1 x_2 ⋯ x_k ∈ Σ* by f_A(x) = α^T A^{x_1} A^{x_2} ⋯ A^{x_k} ω.
By letting A^x = A^{x_1} A^{x_2} ⋯ A^{x_k} for any word x = x_1 x_2 ⋯ x_k ∈ Σ*, we will often use the shorter
notation f_A(x) = α^T A^x ω. A WFA A with n states is minimal if its number of states is minimal, i.e.
any WFA B such that f_A = f_B has at least n states. A function f : Σ* → ℝ is recognizable if it
can be computed by a WFA. In this case the rank of f is the number of states of a minimal WFA
computing f; if f is not recognizable we let rank(f) = ∞.
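As a concrete illustration, the following short numpy sketch evaluates f_A(x) = α^T A^{x_1} ⋯ A^{x_k} ω for a toy WFA; the function name and the toy automaton are our own illustrative choices, not taken from the paper.

    import numpy as np

    def wfa_value(alpha, A, omega, word):
        """Compute f_A(x) = alpha^T A^{x_1} ... A^{x_k} omega.

        alpha : (n,) initial weight vector
        A     : dict mapping each symbol to an (n, n) transition matrix
        omega : (n,) final weight vector
        word  : iterable of symbols
        """
        state = alpha.copy()
        for symbol in word:           # left-to-right product of transition matrices
            state = state @ A[symbol]
        return state @ omega

    # Toy 2-state WFA over the alphabet {'a', 'b'}
    alpha = np.array([1.0, 0.0])
    omega = np.array([0.0, 1.0])
    A = {'a': np.array([[0.5, 0.5], [0.0, 1.0]]),
         'b': np.array([[1.0, 0.0], [0.2, 0.3]])}
    print(wfa_value(alpha, A, omega, "ab"))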
Hankel matrix. The Hankel matrix H_f ∈ ℝ^{Σ*×Σ*} associated with a function f : Σ* → ℝ is the
infinite matrix with entries (H_f)_{u,v} = f(uv) for u, v ∈ Σ*. The spectral learning algorithm for
WFAs relies on the following fundamental relation between the rank of f and the rank of H_f.
Theorem 1. [10, 14] For any function f : Σ* → ℝ, rank(f) = rank(H_f).
Spectral learning. Showing that the rank of the Hankel matrix is upper bounded by the rank of f is
easy: given a WFA A = (α, {A^σ}_{σ∈Σ}, ω) with n states, we have the rank n factorization H_f = PS,
where the matrices P ∈ ℝ^{Σ*×n} and S ∈ ℝ^{n×Σ*} are defined by P_{u,:} = α^T A^u and S_{:,v} = A^v ω for
all u, v ∈ Σ*. The converse is more tedious to show but its proof is constructive, in the sense that it
allows one to build a WFA computing f from any rank n factorization of H_f. This construction is
the cornerstone of the spectral learning algorithm and is given in the following corollary.
Corollary 2. [4, Lemma 4.1] Let f : Σ* → ℝ be a recognizable function with rank n, let H ∈
ℝ^{Σ*×Σ*} be its Hankel matrix, and for each σ ∈ Σ let H^σ ∈ ℝ^{Σ*×Σ*} be defined by (H^σ)_{u,v} = f(uσv)
for all u, v ∈ Σ*.
Then, for any P ∈ ℝ^{Σ*×n}, S ∈ ℝ^{n×Σ*} such that H = PS, the WFA A = (α, {A^σ}_{σ∈Σ}, ω) where
α^T = P_{λ,:}, ω = S_{:,λ}, and A^σ = P^+ H^σ S^+ is a minimal WFA for f.
In practice, finite sub-blocks of the Hankel matrices are used. Given finite sets of prefixes and suffixes
P, S ⊂ Σ*, let H_{P,S}, {H^σ_{P,S}}_{σ∈Σ} be the finite sub-blocks of H whose rows (resp. columns) are
indexed by prefixes in P (resp. suffixes in S). One can show that if P and S are such that λ ∈ P ∩ S
and rank(H) = rank(H_{P,S}), then the previous corollary still holds, i.e. a minimal WFA computing
f can be recovered from any rank n factorization of H_{P,S}. The spectral method thus consists in
estimating the matrices H_{P,S}, H^σ_{P,S} from training data (using e.g. empirical frequencies if f is
stochastic), finding a low-rank factorization of H_{P,S} (using e.g. SVD) and constructing a WFA
approximating f using Corollary 2.
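The scalar spectral method just described fits in a few lines of numpy. The sketch below is a minimal rendering of Corollary 2, assuming the empty string λ sits at index 0 of both the prefix and suffix axes of the Hankel blocks; the function name and dense-matrix representation are our own assumptions.

    import numpy as np

    def spectral_wfa(H, H_sigma, rank):
        """Recover a WFA from finite Hankel blocks (Corollary 2).

        H       : (|P|, |S|) Hankel block, with the empty string at index 0
        H_sigma : dict sigma -> (|P|, |S|) block with entries f(u sigma v)
        rank    : target number of states n
        """
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        P = U[:, :rank] * s[:rank]      # rank-n factorization H ~= P S
        S = Vt[:rank, :]
        P_pinv = np.linalg.pinv(P)
        S_pinv = np.linalg.pinv(S)
        alpha = P[0, :]                 # alpha^T = P_{lambda,:}
        omega = S[:, 0]                 # omega   = S_{:,lambda}
        A = {sig: P_pinv @ Hs @ S_pinv for sig, Hs in H_sigma.items()}
        return alpha, A, omega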
Tensors. We make only sporadic use of tensors in this paper; we thus introduce the few necessary
definitions and notations (more details can be found in [18]). A 3rd order tensor T ∈ ℝ^{d_1×d_2×d_3}
can be seen as a multidimensional array (T_{i_1,i_2,i_3} : i_1 ∈ [d_1], i_2 ∈ [d_2], i_3 ∈ [d_3]). The mode-n
fibers of T are the vectors obtained by fixing all indices except the n-th one, e.g. T_{:,i_2,i_3} ∈ ℝ^{d_1}.
The mode-n flattening of T is the matrix having the mode-n fibers of T for columns and is
denoted by e.g. T_{(1)} ∈ ℝ^{d_1×d_2 d_3}. The mode-1 matrix product of a tensor T ∈ ℝ^{d_1×d_2×d_3} and a
matrix X ∈ ℝ^{m×d_1} is a tensor of size m × d_2 × d_3 denoted by T ×_1 X and defined by the relation
Y = T ×_1 X ⇔ Y_{(1)} = X T_{(1)}; the mode-n product for n = 2, 3 is defined similarly.
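In numpy terms, the mode-1 flattening and the mode-1 product can be sanity-checked as follows; any fixed, consistent ordering of the mode-1 fibers works for the identity Y_(1) = X T_(1) (this sketch uses numpy's default ordering rather than any particular textbook convention).

    import numpy as np

    T = np.random.randn(2, 3, 4)            # T in R^{d1 x d2 x d3}

    # Mode-1 flattening: columns are the mode-1 fibers T[:, i2, i3].
    T1 = T.reshape(T.shape[0], -1)          # (d1, d2*d3), one consistent fiber ordering

    # Mode-1 product with X in R^{m x d1}: (T x_1 X)_(1) = X T_(1).
    X = np.random.randn(5, 2)
    Y = (X @ T1).reshape(5, T.shape[1], T.shape[2])

    # Sanity check against a direct contraction over the first mode
    assert np.allclose(Y, np.einsum('mi,ijk->mjk', X, T))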
3  Vector-Valued WFAs for Multitask Learning
In this section, we present a notion of relatedness between WFAs that we formalize by introducing the
novel model of vector-valued weighted automaton. We then propose a multitask learning algorithm
for WFAs by designing a spectral learning algorithm for vector-valued WFAs.
A notion of relatedness between WFAs. The basic idea behind our approach emerges from interpreting the computation of
a WFA as a linear model in some feature space. Indeed, the computation of a WFA A = (α, {A^σ}_{σ∈Σ}, ω) with n states on a word x ∈ Σ* can be seen as first mapping x to an
n-dimensional feature vector through a compositional feature map φ : Σ* → ℝ^n, and then applying
a linear form in the feature space to obtain the final value f_A(x) = ⟨φ(x), ω⟩. The feature map is
defined by φ(x)^T = α^T A^x for all x ∈ Σ* and it is compositional in the sense that for any x ∈ Σ*
and any σ ∈ Σ we have φ(xσ)^T = φ(x)^T A^σ. We will say that such a feature map is minimal if the
linear space V ⊆ ℝ^n spanned by the vectors {φ(x)}_{x∈Σ*} is of dimension n. Theorem 1 implies that
the dimension of V is actually equal to the rank of f_A, showing that the notion of minimal feature
map naturally coincides with the notion of minimal WFA.
A notion of relatedness between WFAs naturally arises by considering to which extent two (or
more) WFAs can share a joint feature map φ. More precisely, consider two recognizable functions
f_1, f_2 : Σ* → ℝ of rank n_1 and n_2 respectively, with corresponding feature maps φ_1 : Σ* → ℝ^{n_1}
and φ_2 : Σ* → ℝ^{n_2}. Then, a joint feature map for f_1 and f_2 always exists and is obtained by
considering the direct sum φ_1 ⊕ φ_2 : Σ* → ℝ^{n_1+n_2} that simply concatenates the feature vectors
φ_1(x) and φ_2(x) for any x ∈ Σ*. However, this feature map may not be minimal, i.e. there may exist
another joint feature map of dimension n < n_1 + n_2. Intuitively, the smaller this minimal dimension
n is, the more related the two tasks are, with the two extremes being on the one hand n = n_1 + n_2,
where the two tasks are independent, and on the other hand e.g. n = n_1, where one of the (minimal)
feature maps φ_1, φ_2 is sufficient to predict both tasks.
Vector-valued WFA. We now introduce a computational model for vector-valued functions on strings
that will help formalize this notion of relatedness between WFAs.
Definition 1. A d-dimensional vector-valued weighted finite automaton (vv-WFA) with n states is a
tuple A = (α, {A^σ}_{σ∈Σ}, Ω) where α ∈ ℝ^n is the initial weights vector, Ω ∈ ℝ^{n×d} is the matrix of
final weights, and A^σ ∈ ℝ^{n×n} is the transition matrix for each symbol σ ∈ Σ. A vv-WFA computes
a function f~_A : Σ* → ℝ^d defined by
f~_A(x) = α^T A^{x_1} A^{x_2} ⋯ A^{x_k} Ω
for each word x = x_1 x_2 ⋯ x_k ∈ Σ*.
We extend the notions of recognizability, minimality and rank of a WFA in the straightforward way:
a function f~ : Σ* → ℝ^d is recognizable if it can be computed by a vv-WFA, a vv-WFA is minimal
if its number of states is minimal, and the rank of f~ is the number of states of a minimal vv-WFA
computing f~. A d-dimensional vv-WFA can be seen as a collection of d WFAs that all share their
initial vectors and transition matrices but have different final vectors. Alternatively, one could take a
dual approach and define vv-WFAs as a collection of WFAs sharing transitions and final vectors.⁴
vv-WFAs and relatedness between WFAs. We now show how the vv-WFA model naturally captures
the notion of relatedness presented above. Recall that this notion intends to capture to which extent
two recognizable functions f_1, f_2 : Σ* → ℝ, of ranks n_1 and n_2 respectively, can share a joint
forward feature map φ : Σ* → ℝ^n satisfying f_1(x) = ⟨φ(x), ω_1⟩ and f_2(x) = ⟨φ(x), ω_2⟩ for all
x ∈ Σ*, for some ω_1, ω_2 ∈ ℝ^n. Consider the vector-valued function f~ = [f_1, f_2] : Σ* → ℝ²
defined by f~(x) = [f_1(x), f_2(x)] for all x ∈ Σ*. It can easily be seen that the minimal dimension of
a shared forward feature map between f_1 and f_2 is exactly the rank of f~, i.e. the number of states
of a minimal vv-WFA computing f~. This notion of relatedness can be generalized to more than
two functions by considering f~ = [f_1, …, f_m] for m different recognizable functions f_1, …, f_m
of respective ranks n_1, …, n_m. In this setting, it is easy to check that the rank of f~ lies between
max(n_1, …, n_m) and n_1 + ⋯ + n_m; smaller values of this rank lead to a smaller dimension of the
minimal forward feature map and thus, intuitively, to more closely related tasks. We now formalize
this measure of relatedness between recognizable functions.
Definition 2. Given m recognizable functions f_1, …, f_m, we define their relatedness measure by
τ(f_1, …, f_m) = 1 − (rank(f~) − max_i rank(f_i)) / Σ_i rank(f_i), where f~ = [f_1, …, f_m].
One can check that this measure of relatedness takes its values in (0, 1]. We say that tasks are
maximally related when their relatedness measure is 1 and independent when it is minimal. Observe
that the rank R of a vv-WFA does not give enough information to determine whether one set of tasks
is more related than another: the degree of relatedness depends on the relation between R and the
ranks of each individual task. The relatedness parameter τ circumvents this issue by measuring where
R stands between the maximum rank over the different tasks and the sum of their ranks.
Example 1. Let Σ = {a, b, c} and let |x|_σ denote the number of occurrences of σ in x for any
σ ∈ Σ. Consider the functions defined by f_1(x) = 0.5|x|_a + 0.5|x|_b, f_2(x) = 0.3|x|_b − 0.6|x|_c and
f_3(x) = |x|_c for all x ∈ Σ*. It is easy to check that rank(f_1) = rank(f_2) = 4 and rank(f_3) = 2.
Moreover, f_2 and f_3 are maximally related (indeed rank([f_2, f_3]) = 4 = rank(f_2), thus τ(f_2, f_3) =
1), f_1 and f_3 are independent (indeed τ(f_1, f_3) = 2/3 is minimal since rank([f_1, f_3]) = 6 =
rank(f_1) + rank(f_3)), and f_1 and f_2 are related but not maximally related (since 4 = rank(f_1) =
rank(f_2) < rank([f_1, f_2]) = 6 < rank(f_1) + rank(f_2) = 8).
Spectral learning of vv-WFAs. We now design a spectral learning algorithm for vv-WFAs. Given
a function f~ : Σ* → ℝ^d, we define its Hankel tensor H ∈ ℝ^{Σ*×d×Σ*} by H_{u,:,v} = f~(uv) for all
u, v ∈ Σ*. We first show in Theorem 3 (whose proof can be found in the supplementary material)
that the fundamental relation between the rank of a function and the rank of its Hankel matrix can
naturally be extended to the vector-valued case. Compared with Theorem 1, the Hankel matrix is now
replaced by the mode-1 flattening H_{(1)} of the Hankel tensor (which can be obtained by concatenating
the matrices H_{:,i,:} along the horizontal axis).
Theorem 3 (Vector-valued Fliess Theorem). Let f~ : Σ* → ℝ^d and let H be its Hankel tensor. Then
rank(f~) = rank(H_{(1)}).
⁴ Both definitions performed similarly in multitask experiments on the dataset used in Section 5.2; we thus chose multiple final vectors as a convention.
Similarly to the scalar-valued case, this theorem can be leveraged to design a spectral learning
algorithm for vv-WFAs. The following corollary (whose proof can be found in the supplementary
material) shows how a vv-WFA computing a recognizable function f~ : Σ* → ℝ^d of rank n can be
recovered from any rank n factorization of its Hankel tensor.
Corollary 4. Let f~ : Σ* → ℝ^d be a recognizable function with rank n, let H ∈ ℝ^{Σ*×d×Σ*} be its
Hankel tensor, and for each σ ∈ Σ let H^σ ∈ ℝ^{Σ*×d×Σ*} be defined by (H^σ)_{u,:,v} = f~(uσv) for all
u, v ∈ Σ*.
Then, for any P ∈ ℝ^{Σ*×n} and S ∈ ℝ^{n×d×Σ*} such that H = S ×_1 P, the vv-WFA A =
(α, {A^σ}_{σ∈Σ}, Ω) defined by α^T = P_{λ,:}, Ω = S_{:,:,λ}, and A^σ = P^+ (H^σ)_{(1)} (S_{(1)})^+ is a minimal
vv-WFA computing f~.
Similarly to the scalar-valued case, one can check that the previous corollary also holds for any
finite sub-tensors H_{P,S}, {H^σ_{P,S}}_{σ∈Σ} of H indexed by prefixes and suffixes in P, S ⊂ Σ*, whenever
P and S are such that λ ∈ P ∩ S and rank(H_{(1)}) = rank((H_{P,S})_{(1)}); we will call such a basis
(P, S) complete. The spectral learning algorithm for vv-WFAs then consists in estimating these
Hankel tensors from training data and using Corollary 4 to recover a vv-WFA approximating the
target function. Of course, a noisy estimate Ĥ of the Hankel tensor will not be of low rank, and the
factorization Ĥ = S ×_1 P should only be performed approximately in order to counter the presence
of noise. In practice a low rank approximation of Ĥ_{(1)} is obtained using truncated SVD.
Multitask learning of WFAs. Let us now go back to the multitask learning problem and let
f_1, …, f_m : Σ* → ℝ be multiple functions we wish to infer in the form of WFAs. The spectral
learning algorithm for vv-WFAs naturally suggests a way to tackle this multitask problem: by learning
f~ = [f_1, …, f_m] in the form of a vv-WFA, rather than independently learning a WFA for each task
f_i, we implicitly enforce the discovery of a joint forward feature map shared among all tasks.
We will now see how a further step can be added to this learning scheme to enforce more robustness
to noise. The motivation for this additional step comes from the observation that even though a
d-dimensional vv-WFA A = (α, {A^σ}_{σ∈Σ}, Ω) may be minimal, the corresponding scalar-valued
WFAs A_i = ⟨α, {A^σ}_{σ∈Σ}, Ω_{:,i}⟩ for i ∈ [d] may not be. Suppose for example that A_1 is not minimal.
This implies that some part of its state space does not contribute to the function f_1 but comes
from asking for a rich enough state representation that can predict the other tasks as well. Moreover,
when one learns a vv-WFA from noisy estimates of the Hankel tensors, the rank R approximation
Ĥ_{(1)} ≃ P S_{(1)} somehow annihilates the noise contained in the space orthogonal to the top R singular
vectors of Ĥ_{(1)}, but when the WFA A_1 has rank R_1 < R we intuitively see that there is still a
subspace of dimension R − R_1 containing only irrelevant features. In order to circumvent this issue,
we would like to project the (scalar-valued) WFAs A_i down to their true dimensions, intuitively
enforcing each predictor to use as few features as possible for each task, and thus annihilating the
noise lying in the corresponding irrelevant subspaces. To achieve this we will make use of the
following proposition, which explicits the projections needed to obtain minimal scalar-valued WFAs
from a given vv-WFA (the proof is given in the supplementary material).
Proposition 1. Let f~ : Σ* → ℝ^d be a function computed by a minimal vv-WFA A =
(α, {A^σ}_{σ∈Σ}, Ω) with n states and let P, S ⊂ Σ* be a complete basis for f~. For any i ∈ [d],
let f_i : Σ* → ℝ be defined by f_i(x) = f~(x)_i for all x ∈ Σ* and let n_i denote the rank of f_i.
Let P ∈ ℝ^{P×n} be defined by P_{x,:} = α^T A^x for all x ∈ P and, for i ∈ [d], let H_i ∈ ℝ^{P×S} be the
Hankel matrix of f_i and let H_i = U_i D_i V_i^T be its thin SVD (i.e. D_i ∈ ℝ^{n_i×n_i}).
Then, for any i ∈ [d], the WFA A_i = ⟨α_i, {A_i^σ}_{σ∈Σ}, ω_i⟩ defined by
α_i^T = α^T P^+ U_i,  ω_i = U_i^T P Ω_{:,i}  and  A_i^σ = U_i^T P A^σ P^+ U_i for each σ ∈ Σ,
is a minimal WFA computing f_i.
Given noisy estimates Ĥ, {Ĥ^σ}_{σ∈Σ} of the Hankel tensors of a function f~ and estimates R of
the rank of f~ and R_i of the ranks of the f_i's, the first step of the learning algorithm consists in
applying Corollary 4 to the factorization Ĥ_{(1)} ≃ U(DV^T) obtained by truncated SVD to get a
vv-WFA A approximating f~. Then, Proposition 1 can be used to project down each WFA A_i by
estimating U_i with the top R_i left singular vectors of Ĥ_{:,i,:}. The overall procedure for our Multi-Task
Spectral Learning (MT-SL) is summarized in Algorithm 1, where lines 1-3 correspond to the vv-WFA
estimation while lines 4-7 correspond to projecting down the corresponding scalar-valued WFAs. To
further motivate the projection step, let us consider the case when m tasks are completely unrelated,
and each of them requires n states. Single-task learning would lead to a model with O(|Σ|mn²)
parameters, while the multi-task learning approach would return a larger model of size O(|Σ|(mn)²);
the projection step eliminates such redundancy.
Algorithm 1 MT-SL: Spectral Learning of vector-valued WFAs for multitask learning
Input: Empirical Hankel tensors Ĥ, {Ĥ^σ}_{σ∈Σ} of size P × m × S for the target function f~ =
[f_1, …, f_m] (where P, S are subsets of Σ* both containing λ), a common rank R, and task-specific
ranks R_i for i ∈ [m].
Output: WFAs A_i approximating f_i for each i ∈ [m].
1: Compute the rank R truncated SVD Ĥ_{(1)} ≃ UDV^T.
2: Let A = (α, {A^σ}_{σ∈Σ}, Ω) be the vv-WFA defined by
   α^T = U_{λ,:}, Ω = U^T Ĥ_{:,:,λ} and A^σ = U^T Ĥ^σ_{(1)} (Ĥ_{(1)})^+ U for each σ ∈ Σ.
3: for i = 1 to m do
4:   Compute the rank R_i truncated SVD Ĥ_{:,i,:} ≃ U_i D_i V_i^T.
5:   Let A_i = ⟨U_i^T U α, {U_i^T U A^σ U^T U_i}_{σ∈Σ}, U_i^T U Ω_{:,i}⟩.
6: end for
7: return A_1, …, A_m.
4  Theoretical Analysis
Computational complexity. The computational cost of the classical spectral learning algorithm (SL)
is in O(N + R|P||S| + R²|P||Σ|), where the first term corresponds to estimating the Hankel matrices from a sample of size N, the second one to the rank R truncated SVD, and the third one
to computing the transition matrices A^σ. In comparison, the computational cost of MT-SL is in
O(mN + (mR + Σ_i R_i)|P||S| + (mR² + Σ_i R_i²)|P||Σ|), showing that the increase in complexity is essentially linear in the number of tasks m.
Robustness in subspace estimation. In order to give some theoretical insights on the potential
benefits of MT-SL, let us consider the simple case where the tasks are maximally related with
common rank R = R_1 = ⋯ = R_m. Let Ĥ_1, …, Ĥ_m ∈ ℝ^{P×S} be the empirical Hankel matrices
for the m tasks and let E_i = Ĥ_i − H_i be the error terms, where H_i is the true Hankel matrix for the
i-th task. Then the flattening Ĥ = Ĥ_{(1)} ∈ ℝ^{|P|×m|S|} (resp. H = H_{(1)}) can be obtained by stacking
the matrices Ĥ_i (resp. H_i) along the horizontal axis. Consider the problem of learning the first task.
One key step of both SL and MT-SL resides in estimating the left singular subspace of H_1 and H
respectively from their noisy estimates. When the tasks are maximally related, this space U is the
same for H and H_1, …, H_m, and we intuitively see that the benefits of MT-SL will stem from the
fact that the SVD of Ĥ should lead to a more accurate estimation of U than the one relying only on
Ĥ_1. It is also intuitive to see that since the Hankel matrices Ĥ_i have been stacked horizontally, the
estimation of the right singular subspace might not benefit from performing SVD on Ĥ. However,
classical results on singular subspace estimation (see e.g. [29, 20]) provide uniform bounds for both
left and right singular subspaces (i.e. bounds on the maximum of the estimation errors for the left and
right spaces). To circumvent this issue, we use a recent result on rate optimal asymmetric perturbation
bounds for left and right singular spaces [9] to obtain the following theorem, relating the ratio between
the dimensions of a matrix to the quality of the subspace estimation provided by SVD (the proof can
be found in the supplementary material).
Theorem 5. Let M ∈ ℝ^{d_1×d_2} be of rank R and let M̂ = M + E where E is a random noise term
such that vec(E)/‖E‖_F follows a uniform distribution on the unit sphere in ℝ^{d_1 d_2}. Let Π_U, Π_Û ∈ ℝ^{d_1×d_1}
be the matrices of the orthogonal projections onto the spaces spanned by the top R left singular
vectors of M and M̂ respectively.
Let δ > 0, let σ = s_R(M) be the smallest non-zero singular value of M and suppose that ‖E‖_F ≤
σ/2. Then, with probability at least 1 − δ,

    ‖Π_U − Π_Û‖_F ≤ 4 ( √(2(d_1 − R)R + 2 log(1/δ)) / √(d_1 d_2) · ‖E‖_F/σ + ‖E‖_F²/σ² ).
A few remarks on this theorem are in order. First, the Frobenius norm between the projection matrices
measures the distance between the two subspaces (it is in fact proportional to the classical sin-theta
distance between subspaces). Second, the assumption ‖E‖_F ≤ σ/2 corresponds to the magnitude of
the noise being small compared to the magnitude of M (in particular it implies ‖E‖_F/σ < 1); this
is a reasonable and common assumption in subspace identification problems, see e.g. [30]. Lastly,
as d_2 grows the first term in the upper bound becomes irrelevant and the error is dominated by the
quadratic term, which decreases with ‖E‖_F faster than classical results. Intuitively this tells us that
there is a first regime where growing d_2 (i.e. adding more tasks) is beneficial, until the point where
the quadratic term dominates (and where the bound becomes somewhat independent of d_2).
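The subspace distance appearing in Theorem 5 is straightforward to compute numerically. A minimal sketch follows; the function name and the synthetic low-rank example are ours, intended only to illustrate the quantity being bounded.

    import numpy as np

    def left_subspace_distance(M, M_hat, R):
        """||Pi_U - Pi_Uhat||_F for the top-R left singular subspaces of M and M_hat."""
        U = np.linalg.svd(M, full_matrices=False)[0][:, :R]
        Uh = np.linalg.svd(M_hat, full_matrices=False)[0][:, :R]
        return np.linalg.norm(U @ U.T - Uh @ Uh.T, ord='fro')

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 400))  # rank-5 matrix
    E = 0.05 * rng.standard_normal(M.shape)                            # additive noise
    print(left_subspace_distance(M, M + E, R=5))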
Going back to the power of MT-SL to leverage information from related tasks, let E ∈ ℝ^{|P|×m|S|} be
the matrix obtained by stacking the noise matrices E_i along the horizontal axis. If we assume that
the entries of the error terms E_i are i.i.d. from e.g. a normal distribution, we can apply the previous
proposition to the left singular subspaces of Ĥ_{(1)} and H_{(1)}. One can check that in this case we have
‖E‖²_F = Σ_{i=1}^m ‖E_i‖²_F and σ = s_R(H) ≥ √(Σ_{i=1}^m s_R(H_i)²) (since R = R_1 = ⋯ = R_m when
tasks are maximally related). Thus, if the norms of the noise terms E_i are roughly the same, and so
are the smallest non-zero singular values of the matrices H_i, we get ‖E‖_F/σ ≤ O(‖E_1‖_F/s_R(H_1)).
Hence, given enough tasks, the estimation error of the left singular subspace of H_1 in the multitask
setting (i.e. by performing SVD on Ĥ_{(1)}) is intuitively in O(‖E_1‖²_F/s_R(H_1)²), while it is only
in O(‖E_1‖_F/s_R(H_1)) when relying solely on Ĥ_1, which shows the potential benefits of MT-SL.
Indeed, as the amount of training data increases the error in the estimated matrices decreases, thus
T = ‖E_1‖_F/s_R(H_1) goes to 0 and an error of order O(T²) decays faster than one of order O(T).
5  Experiments
We evaluate the performance of the proposed multitask learning method (MT-SL) on both synthetic
and real world data. We use two performance metrics: perplexity per character on a test set T, which
is defined by perp(h) = 2^{−(1/M) Σ_{x∈T} log₂ h(x)}, where M is the number of symbols in the test set and
h is the hypothesis, and word error rate (WER), which measures the proportion of mis-predicted
symbols averaged over all prefixes in the test set (when the most likely symbol is predicted). Both
experiments are in a stochastic setting, i.e. the functions to be learned are probability distributions,
and explore the regime where the learner has access to a small training sample drawn from the target
task, while larger training samples are available for related tasks. We compare MT-SL with the
classical spectral learning method (SL) for WFAs (note that SL has been extensively compared to
EM and n-gram models in the literature, see e.g. [4] and [5] and references therein). For both methods the
prefix set P (resp. suffix set S) is chosen by taking the 1,000 most frequent prefixes (resp. suffixes)
in the training data of the target task, and the values of the ranks are chosen using a validation set.
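For reference, the two evaluation metrics can be written down directly. This is a plain sketch of the definitions above, not the authors' evaluation code, and it assumes log base 2 in the perplexity definition.

    import numpy as np

    def perplexity_per_char(log2_probs, num_symbols):
        """perp(h) = 2**(-(1/M) * sum_{x in T} log2 h(x)), M = #symbols in the test set."""
        return 2.0 ** (-np.sum(log2_probs) / num_symbols)

    def word_error_rate(predictions, targets):
        """Proportion of mis-predicted next symbols over all test prefixes."""
        wrong = sum(p != t for p, t in zip(predictions, targets))
        return wrong / len(targets)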
5.1  Synthetic Data
We first assess the validity of MT-SL on synthetic data. We randomly generated stochastic WFAs
using the process used for the PAutomaC competition [27] with symbol sparsity 0.4 and transition
sparsity 0.15, for an alphabet Σ of size 10. We generated related WFAs⁵ sharing a joint feature
⁵ More precisely, we first generate a probabilistic automaton (PA) A_S = (α_S, {A^σ_S}_{σ∈Σ}, ω_S) with d_S states. Then, for each task i = 1, …, m we generate a second PA A_T = (α_T, {A^σ_T}_{σ∈Σ}, ω_T) with d_T states and a random vector ω ∈ [0, 1]^{d_S+d_T}. Both PAs are generated using the process described in [27]. The task f_i is then obtained as the distribution computed by the stochastic WFA ⟨α_S ⊕ α_T, {A^σ_S ⊕ A^σ_T}_{σ∈Σ}, ω̃⟩ with ω̃ = ω/Z, where the constant Z is chosen such that Σ_{x∈Σ*} f_i(x) = 1.
[Figure 1 shows six panels: for each setting dS = 10 with dT = 0, 5, 10, perplexity (top row) and word error rate (bottom row) are plotted against training size for the true model, SL, and MT-SL with 2, 4 and 8 tasks.]
Figure 1: Comparison (on synthetic data) between the spectral learning algorithm (SL) and our multitask
algorithm (MT-SL) for different numbers of tasks and different degrees of relatedness between the tasks: dS is
the dimension of the space shared by all tasks and dT the one of the task-specific space (see text for details).
space of dimension d_S = 10, each task also having a task-specific feature space of dimension d_T, i.e.
for m tasks f_1, …, f_m each WFA computing f_i has rank d_S + d_T and the vv-WFA computing
f~ = [f_1, …, f_m] has rank d_S + m·d_T. We generated 3 sets of WFAs for different task-specific
dimensions d_T = 0, 5, 10. The learner had access to training samples of size 5,000 drawn from each
related task f_2, …, f_m and a training sample of size ranging from 50 to 5,000 drawn from the
target task f_1. Results on a test set of size 1,000, averaged over 10 runs, are reported in Figure 1.
For both evaluation measures, when the task specific dimension is small compared to the dimension
of the joint feature space, i.e. dT = 0, 5, MT-SL clearly outperforms SL that only relies on the
target task data. Moreover, increasing the number of related tasks tends to improve the performances
of MT-SL. However, when dS = dT = 10, MT-SL performs similarly in terms of perplexity and
WER, showing that the multitask approach offers no benefits when the tasks are too loosely related.
Additional experimental results for the case of totally unrelated tasks (dS = 0, dT = 10) as well
as comparisons with MT-SL without the projection step (i.e. without lines 4-7 of Algorithm 1) are
presented in the supplementary material.
5.2  Real Data
We evaluate MT-SL on 33 languages from the Universal Dependencies (UNIDEP) 1.4 treebank [24],
using the 17-tag universal Part of Speech (PoS) tagset. This dataset contains sentences from various
languages where each word is annotated with Google universal PoS tags [25], and thus can be seen as
a collection of samples drawn from 33 distributions over strings on an alphabet of size 17. For each
language, the available data is split between a training, a validation and a test set (80%, 10%, 10%).
For each language and for various sizes of training samples, we compare independently learning the
target task with SL against using MT-SL to exploit training data from related tasks. We tested two
ways of selecting the related tasks: (1) all other languages are used and (2) for each language we
selected the 4 closest languages w.r.t. the distance between the subspaces spanned by the top 50 left
singular vectors of their Hankel matrices.⁶
We compare MT-SL against SL (using only the training data for the target task) and against a
naive baseline where all data from different tasks are bagged together and used as a training set for
SL (SL-bagging). We also include the results obtained using MT-SL without the projection step (MT-SL-noproj). We report the average relative improvement of MT-SL, SL-bagging and MT-SL-noproj
w.r.t. SL over all languages in Table 1; e.g. for perplexity we report 100 · (p_SL − p_MT)/p_SL, where
p_SL (resp. p_MT) is the perplexity obtained by SL (resp. MT-SL) on the test set. We see that the
multitask approach leads to improved results for both metrics, that the benefits tend to be greater for
small training sizes, and that restricting the number of auxiliary tasks is overall beneficial. To give a
⁶ The common basis (P, S) for these Hankel matrices is chosen by taking the union of the 100 most frequent prefixes and suffixes in each training sample.
Table 1: Average relative improvement with respect to single task spectral learning (SL) of the
multitask approach (with and without the projection step: MT-SL and MT-SL-noproj) and the
bagging baseline (SL-bagging) on the UNIDEP dataset.
(a) Perplexity average relative improvement (in %).

Training size       100              500              1000             5000             all available data
Related tasks: all other languages
MT-SL               7.0744 (±7.76)   3.6666 (±5.22)   3.2879 (±5.17)   3.4187 (±5.57)   3.1574 (±5.48)
MT-SL-noproj        2.9884 (±9.82)   2.2469 (±7.49)   0.8509 (±7.41)   1.1658 (±6.59)   0.6958 (±6.38)
SL-bagging         −19.00 (±29.1)   −13.32 (±22.4)   −10.65 (±19.7)   −5.371 (±14.6)   −2.630 (±13.0)
Related tasks: 4 closest languages
MT-SL               6.0069 (±6.76)   4.3670 (±5.83)   4.4049 (±5.50)   2.9689 (±5.87)   2.8229 (±5.90)
MT-SL-noproj        4.5732 (±8.78)   2.9421 (±7.83)   2.4549 (±7.15)   2.2166 (±6.82)   2.1451 (±6.52)
SL-bagging         −18.41 (±28.4)   −12.73 (±22.0)   −10.34 (±20.1)   −3.086 (±12.7)   0.1926 (±10.2)

(b) WER average relative improvement (in %).

Training size       100              500              1000             5000             all available data
Related tasks: all other languages
MT-SL               1.4919 (±2.37)   1.3786 (±2.94)   1.2281 (±2.62)   1.4964 (±2.70)   1.4932 (±2.77)
MT-SL-noproj       −5.763 (±6.82)   −9.454 (±8.95)   −9.197 (±7.25)   −9.201 (±6.02)   −9.600 (±5.55)
SL-bagging         −3.067 (±10.8)   −6.998 (±11.6)   −7.788 (±9.88)   −8.791 (±9.54)   −8.611 (±9.74)
Related tasks: 4 closest languages
MT-SL               2.0883 (±3.26)   1.5175 (±2.87)   1.2961 (±2.57)   1.3080 (±2.55)   1.2160 (±2.31)
MT-SL-noproj       −4.139 (±5.10)   −5.841 (±6.29)   −5.399 (±6.26)   −5.526 (±4.93)   −5.556 (±4.90)
SL-bagging          0.3372 (±7.80)  −3.045 (±8.12)   −3.822 (±7.33)   −4.350 (±6.90)   −3.588 (±7.06)
concrete example, on the Basque task with a training set of size 500, the WER was reduced from
≈ 76% for SL to ≈ 70% using all other languages as related tasks, and to ≈ 65% using the 4 closest
tasks (Finnish, Polish, Czech and Indonesian). Overall, both SL-bagging and MT-SL-noproj obtain
worse performance than MT-SL (though MT-SL-noproj still outperforms SL in terms of perplexity,
while SL-bagging performs almost always worse than SL). Detailed results on all languages, along
with the list of closest languages used for method (2), are reported in the supplementary material.
6  Conclusion
We introduced the novel model of vector-valued WFA that allowed us to define a notion of relatedness
between recognizable functions and to design a multitask spectral learning algorithm for WFAs (MT-SL). The benefits of MT-SL have been theoretically motivated and showcased on both synthetic and
real data experiments. In future works, we plan to apply MT-SL in the context of reinforcement
learning and to identify other areas of machine learning where vv-WFAs could prove to be useful. It
would also be interesting to investigate a weighted approach such as the one presented in [19] for
classical spectral learning; this could prove useful to handle the case where the amount of available
training data differs greatly between tasks.
Acknowledgments
G. Rabusseau acknowledges support of an IVADO postdoctoral fellowship. B. Balle completed this
work while at Lancaster University. We thank NSERC and CIFAR for their financial support.
References
[1] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. In NIPS, pages 41–48, 2007.
[2] Raphaël Bailly, François Denis, and Liva Ralaivola. Grammatical inference as a principal component analysis problem. In ICML, pages 33–40, 2009.
[3] Borja Balle. Learning Finite-State Machines: Algorithmic and Statistical Aspects. PhD thesis, Universitat Politècnica de Catalunya, 2013.
[4] Borja Balle, Xavier Carreras, Franco M. Luque, and Ariadna Quattoni. Spectral learning of weighted automata. Machine Learning, 96(1-2):33–63, 2014.
[5] Borja Balle, William L. Hamilton, and Joelle Pineau. Methods of moments for learning stochastic languages: Unified presentation and empirical comparison. In ICML, pages 1386–1394, 2014.
[6] Jonathan Baxter et al. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12(149-198):3, 2000.
[7] Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. In Learning Theory and Kernel Machines, pages 567–580. Springer, 2003.
[8] Byron Boots, Sajid M. Siddiqi, and Geoffrey J. Gordon. Closing the learning-planning loop with predictive state representations. International Journal of Robotics Research, 30(7):954–966, 2011.
[9] T. Tony Cai and Anru Zhang. Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics. arXiv preprint arXiv:1605.00353, 2016.
[10] Jack W. Carlyle and Azaria Paz. Realizations by stochastic finite automata. Journal of Computer and System Sciences, 5(1):26–40, 1971.
[11] Rich Caruana. Multitask learning. In Learning to Learn, pages 95–133. Springer, 1998.
[12] Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle H. Ungar. Experiments with spectral learning of latent-variable PCFGs. In NAACL-HLT, pages 148–157, 2013.
[13] François Denis and Yann Esposito. On rational stochastic languages. Fundamenta Informaticae, 86(1,2):41–77, 2008.
[14] Michel Fliess. Matrices de Hankel. Journal de Mathématiques Pures et Appliquées, 53(9):197–222, 1974.
[15] Emily Fox, Michael I. Jordan, Erik B. Sudderth, and Alan S. Willsky. Sharing features among dynamical systems with beta processes. In NIPS, pages 549–557, 2009.
[16] Mark A. Girolami and Ata Kabán. Simplicial mixtures of Markov chains: Distributed modelling of dynamic user profiles. In NIPS, volume 16, pages 9–16, 2003.
[17] Daniel J. Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. In COLT, 2009.
[18] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[19] Alex Kulesza, Nan Jiang, and Satinder Singh. Low-rank spectral learning with weighted loss functions. In AISTATS, 2015.
[20] Ren-Cang Li. Relative perturbation theory: II. Eigenspace and singular subspace variations. SIAM Journal on Matrix Analysis and Applications, 20(2):471–492, 1998.
[21] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Recurrent neural network for text classification with multi-task learning. In IJCAI, pages 2873–2879, 2016.
[22] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015.
[23] Kai Ni, Lawrence Carin, and David Dunson. Multi-task learning for sequential data via iHMMs and the nested Dirichlet process. In ICML, pages 689–696, 2007.
[24] Joakim Nivre, Željko Agić, Lars Ahrenberg, et al. Universal Dependencies 1.4, 2016. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University.
[25] Slav Petrov, Dipanjan Das, and Ryan McDonald. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086, 2011.
[26] Michael Thon and Herbert Jaeger. Links between multiplicity automata, observable operator models and predictive state representations: a unified learning framework. Journal of Machine Learning Research, 16:103–147, 2015.
[27] Sicco Verwer, Rémi Eyraud, and Colin de la Higuera. Results of the PAutomaC probabilistic automaton learning competition. In ICGI, pages 243–248, 2012.
[28] Boyu Wang, Joelle Pineau, and Borja Balle. Multitask generalized eigenvalue program. In AAAI, pages 2115–2121, 2016.
[29] Per-Åke Wedin. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111, 1972.
[30] Laurent Zwald and Gilles Blanchard. On the convergence of eigenspaces in kernel principal component analysis. In NIPS, pages 1649–1656, 2006.
Multi-way Interacting Regression via Factorization
Machines
XuanLong Nguyen
Department of Statistics
University of Michigan
[email protected]
Mikhail Yurochkin
Department of Statistics
University of Michigan
[email protected]
Nikolaos Vasiloglou
LogicBlox
[email protected]
Abstract
We propose a Bayesian regression method that accounts for multi-way interactions
of arbitrary orders among the predictor variables. Our model makes use of a
factorization mechanism for representing the regression coefficients of interactions
among the predictors, while the interaction selection is guided by a prior distribution
on random hypergraphs, a construction which generalizes the Finite Feature Model.
We present a posterior inference algorithm based on Gibbs sampling, and establish
posterior consistency of our regression model. Our method is evaluated with
extensive experiments on simulated data and demonstrated to be able to identify
meaningful interactions in applications in genetics and retail demand forecasting.1
1  Introduction
A fundamental challenge in supervised learning, particularly in regression, is the need for learning
functions which produce accurate prediction of the response, while retaining the explanatory power
for the role of the predictor variables in the model. The standard linear regression method is favored
for the latter requirement, but it fails the former when there are complex interactions among the
predictor variables in determining the response. The challenge becomes even more pronounced in a
high-dimensional setting: there are exponentially many potential interactions among the predictors,
for which it is simply not computationally feasible to resort to standard variable selection techniques
(cf. Fan & Lv (2010)).
There are numerous examples where accounting for the predictors' interactions is of interest, including problems of identifying epistasis (gene-gene) and gene-environment interactions in genetics
(Cordell, 2009), modeling problems in political science (Brambor et al., 2006) and economics (Ai &
Norton, 2003). In the business analytics of retail demand forecasting, a strong prediction model that
also accurately accounts for the interactions of relevant predictors such as seasons, product types,
geography, promotions, etc. plays a critical role in the decision making of marketing design.
A simple way to address the aforementioned issue in the regression problem is to simply restrict
our attention to lower order interactions (i.e. 2- or 3-way) among predictor variables. This can be
achieved, for instance, via a support vector machine (SVM) using polynomial kernels (Cristianini &
Shawe-Taylor, 2000), which pre-determine the maximum order of predictor interactions. In practice,
for computational reasons the degree of the polynomial kernel tends to be small. Factorization
machines (Rendle, 2010) can be viewed as an extension of SVM to sparse settings where most
¹ Code is available at https://github.com/moonfolk/MiFM.
interactions are observed only infrequently, subject to a constraint that the interaction order (a.k.a.
interaction depth) is given. Neither SVM nor FM can perform any selection of predictor interactions,
but several authors have extended the SVM by combining it with an ℓ1 penalty for the purpose of feature
selection (Zhu et al., 2004) and gradient boosting for FM (Cheng et al., 2014) to select interacting
features. It is also an option to perform linear regression on as many interactions as we can and
combine it with regularization procedures for selection (e.g. LASSO (Tibshirani, 1996) or Elastic
net (Zou & Hastie, 2005)). It is noted that such methods are still not computationally feasible for
accounting for interactions that involve a large number of predictor variables.
In this work we propose a regression method capable of adaptive selection of multi-way interactions of
arbitrary order (MiFM for short), while avoiding the combinatorial complexity growth encountered by
the methods described above. MiFM extends the basic factorization mechanism for representing the
regression coefficients of interactions among the predictors, while the interaction selection is guided
by a prior distribution on random hypergraphs. The prior, which does not insist on the upper bound on
the order of interactions among the predictor variables, is motivated from but also generalizes Finite
Feature Model, a parametric form of the well-known Indian Buffet process (IBP) (Ghahramani &
Griffiths, 2005). We introduce a notion of the hypergraph of interactions and show how a parametric
distribution over binary matrices can be utilized to express interactions of unbounded order. In
addition, our generalized construction allows us to exert extra control on the tail behavior of the
interaction order. IBP was initially used for infinite latent feature modeling and later utilized in the
modeling of a variety of domains (see a review paper by Griffiths & Ghahramani (2011)).
In developing MiFM, our contributions are the following: (i) we introduce a Bayesian multi-linear
regression model, which aims to account for the multi-way interactions among predictor variables;
part of our model construction includes a prior specification on the hypergraph of interactions; in
particular we show how our prior can be used to model the incidence matrix of interactions in several
ways; (ii) we propose a procedure to estimate coefficients of arbitrary interactions structure; (iii)
we establish posterior consistency of the resulting MiFM model, i.e., the property that the posterior
distribution on the true regression function represented by the MiFM model contracts toward the truth
under some conditions, without requiring an upper bound on the order of the predictor interactions;
and (iv) we present a comprehensive simulation study of our model and analyze its performance
for retail demand forecasting and case-control genetics datasets with epistasis. The unique strength
of the MiFM method is the ability to recover meaningful interactions among the predictors while
maintaining a competitive prediction quality compared to existing methods that target prediction only.
The paper proceeds as follows. Section 2 introduces the problem of modeling interactions in
regression, and gives a brief background on the Factorization Machines. Sections 3 and 4 carry out
the contributions outlined above. Section 5 presents results of the experiments. We conclude with a
discussion in Section 6.
2  Background and related work
Our starting point is a model which regresses a response variable y ∈ ℝ on observed covariates
(predictor variables) x ∈ ℝ^D through a non-linear functional relationship. In particular, we consider a
multi-linear structure to account for the interactions among the covariates in the model:

    E(Y | x) = w_0 + Σ_{i=1}^D w_i x_i + Σ_{j=1}^J β_j Π_{i∈Z_j} x_i.    (1)

Here, w_i for i = 0, …, D are the bias and linear weights as in the standard linear regression model, J
is the number of multi-way interactions, and Z_j, β_j for j = 1, …, J represent the interactions,
i.e., the sets of indices of interacting covariates and the corresponding interaction weights, respectively.
Fitting such a model is very challenging even if the dimension D is of the magnitude of a dozen, since there
are 2^D − 1 possible interactions to choose from in addition to the other parameters. The goal of our work
is to perform interaction selection and estimate the corresponding weights. Before doing so, let us first
discuss a model that puts a priori assumptions on the number and the structure of interactions.
2.1  Factorization Machines
Factorization Machines (FM) (Rendle, 2010) are a special case of the general interactions model
defined in Eq. (1). Let J = Σ_{l=2}^d (D choose l) and Z := ∪_{j=1}^J Z_j = ∪_{l=2}^d {(i_1, …, i_l) | i_1 < … <
i_l; i_1, …, i_l ∈ {1, …, D}}, i.e., the set of interactions is restricted to all 2-, …, d-way ones, so (1) becomes:
    E(Y | x) = w_0 + Σ_{i=1}^D w_i x_i + Σ_{l=2}^d Σ_{i_1=1}^D ⋯ Σ_{i_l=i_{l−1}+1}^D β_{i_1,…,i_l} Π_{t=1}^l x_{i_t},    (2)
where the coefficients β_j := β_{i_1,…,i_l} quantify the interactions. In order to reduce model complexity and
handle sparse data more effectively, Rendle (2010) suggested factorizing the interaction weights using
PARAFAC (Harshman, 1970): β_{i_1,…,i_l} := Σ_{f=1}^{k_l} Π_{t=1}^l v^{(l)}_{i_t,f}, where V^{(l)} ∈ ℝ^{D×k_l}, k_l ∈ ℕ and
k_l ≪ D for l = 2, …, d. Advantages of the FM over the SVM are discussed in detail by Rendle (2010).
FMs turn out to be successful in recommendation-system setups, since they utilize various context
information (Rendle et al., 2011; Nguyen et al., 2014). Parameter estimation is typically achieved
via stochastic gradient descent, or in the case of Bayesian FM (Freudenthaler et al., 2011)
via MCMC. In practice only d = 2 or d = 3 is typically used, since the number of interactions, and
hence the computational complexity, grows exponentially with d. We are interested in methods that can adapt
to fewer interactions but of arbitrarily varying orders.
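For the common d = 2 case, the factorized form admits the well-known O(kD) evaluation identity from Rendle (2010). The sketch below illustrates it; the function name is our own choice.

    import numpy as np

    def fm_predict(x, w0, w, V):
        """Second-order FM prediction: w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
        using sum_{i<j} <v_i,v_j> x_i x_j
            = 0.5 * sum_f ((V^T x)_f**2 - ((V**2)^T x**2)_f)."""
        s = V.T @ x                     # (k,) per-factor weighted sums
        s2 = (V ** 2).T @ (x ** 2)      # (k,) per-factor squared sums
        return w0 + w @ x + 0.5 * np.sum(s ** 2 - s2)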
3  MiFM: Multi-way Factorization Machine
We start by defining a mathematical object that can encode sets of interacting variables Z1 , . . . , ZJ
of Eq. (1) and selecting an appropriate prior to model it.
3.1  Modeling hypergraph of interactions
Multi-way interactions are naturally represented by hypergraphs, which are defined as follows.
Definition 1. Given D vertices indexed by S = {1, . . . , D}, let Z = {Z1 , . . . , ZJ } be the set of J
subsets of S. Then we say that G = (S, Z) is a hypergraph with D vertices and J hyperedges.
A hypergraph can be equivalently represented as a binary incidence matrix. Therefore, with
a slight abuse of notation, we recast Z as the matrix of interactions, i.e., Z ∈ {0, 1}^{D×J}, where
Z_{i_1 j} = Z_{i_2 j} = 1 iff i_1 and i_2 are part of the hyperedge indexed by column/interaction j.
Placing a prior on multi-way interactions is the same as specifying a prior distribution on the space
of binary matrices. We will at first adopt the Finite Feature Model (FFM) prior (Ghahramani &
Griffiths, 2005), which is based on the Beta-Bernoulli construction: π_j | γ_1, γ_2 ∼ Beta(γ_1, γ_2) i.i.d. and
Z_{ij} | π_j ∼ Bernoulli(π_j) i.i.d. This simple prior has the attractive feature of treating the variables involved
in each interaction (hyperedge) in a symmetric fashion and admits exchangeability among the
variables inside interactions. In Section 4 we will present an extension of the FFM which allows us to
incorporate extra information about the distribution of the interaction degrees and explain the choice
of the parametric construction.
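Sampling from the FFM prior is a two-step construction; a minimal sketch follows (function name ours).

    import numpy as np

    def sample_ffm(D, J, gamma1, gamma2, rng=None):
        """Draw Z in {0,1}^{D x J}: pi_j ~ Beta(gamma1, gamma2), Z_ij ~ Bernoulli(pi_j)."""
        rng = rng or np.random.default_rng()
        pi = rng.beta(gamma1, gamma2, size=J)         # per-hyperedge inclusion probability
        return (rng.random((D, J)) < pi).astype(int)  # column j is hyperedge Z_j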
3.2  Modeling regression with multi-way interactions
Now that we know how to model unknown interactions of arbitrary order, we combine this with the
Bayesian FM to arrive at a complete specification of MiFM, the Multi-way interacting Factorization
Machine. Starting with the specification for the hyperparameters:

    λ ∼ Γ(α_1/2, β_1/2),  λ_k ∼ Γ(α_0/2, β_0/2),  σ ∼ Γ(α_0/2, β_0/2),
    μ ∼ N(μ_0, 1/γ_0),  μ_k ∼ N(μ_0, 1/γ_0) for k = 1, …, K.

Interactions and their weights:

    w_i | μ, λ ∼ N(μ, 1/λ) for i = 0, …, D,  Z ∼ FFM(γ_1, γ_2),
    v_{ik} | μ_k, λ_k ∼ N(μ_k, 1/λ_k) for i = 1, …, D; k = 1, …, K.
Likelihood specification given data pairs (y_n, x_n = (x_{n1}, …, x_{nD}))_{n=1}^N:

    y_n | Θ ∼ N(y(x_n, Θ), σ), where y(x, Θ) := w_0 + Σ_{i=1}^D w_i x_i + Σ_{j=1}^J Σ_{k=1}^K Π_{i∈Z_j} x_i v_{ik},    (3)

for n = 1, …, N, and Θ = {Z, V, σ, w_{0,…,D}}. Note that while the specification above utilizes
Gaussian distributions, the main innovation of MiFM is the idea of utilizing the incidence matrix of the
hypergraph of interactions Z, together with a low rank matrix V, to model the mean response as in Eq. (1).
Therefore, within the MiFM framework, different distributional choices can be made according to the
problem at hand, e.g. a Poisson likelihood and Gamma priors for count data, or logistic regression
for classification. Additionally, if selection of the linear terms is desired, Σ_{i=1}^D w_i x_i can be removed
from the model, since the FFM can select linear interactions besides higher order ones.
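To make the mean response in Eq. (3) concrete, here is a direct numpy transcription. The dense Z, V representation and the function name are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mifm_mean(x, w0, w, Z, V):
        """y(x, Theta) = w0 + sum_i w_i x_i + sum_j sum_k prod_{i in Z_j} x_i v_ik  (Eq. 3)."""
        y = w0 + w @ x
        for j in range(Z.shape[1]):
            members = np.flatnonzero(Z[:, j])         # covariates in interaction j
            if members.size:
                # (|Z_j|, K) array of x_i * v_ik; product over i, then sum over k
                y += np.prod(x[members, None] * V[members, :], axis=0).sum()
        return y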
3.3  MiFM for Categorical Variables
In numerous real world scenarios such as retail demand forecasting, recommender systems, genotype
structures, most predictor variables may be categorical (e.g. color, season). Categorical variables
with multiple attributes are often handled by so-called "one-hot encoding", via vectors of binary
variables (e.g., IS_blue; IS_red), which must be mutually exclusive. The FFM cannot immediately be
applied to such structures since it assigns positive probability to interactions between attributes of the
same category. To this end, we model interactions between categories in Z, while with V we model
the coefficients of interactions between attributes. For example, for an interaction between "product
type" and "season" in Z, V will have individual coefficients for "jacket-summer" and "jacket-winter",
leading to a more refined predictive model of jacket sales (see examples in Section 5.2).
We proceed to describe MiFM for the case of categorical variables as follows. Let U be the number of categories and d_u be the set of attributes for category u, for u = 1, ..., U. Then D = Σ_{u=1}^U card(d_u) is the number of binary variables in the one-hot encoding and ⊔_{u=1}^U d_u = {1, ..., D}. In this representation the input data of predictors is X, an N × U matrix, where x_nu is the active attribute of category u of observation n. The coefficients matrix is V ∈ R^{D×K} and the interactions are Z ∈ {0,1}^{U×J}. All priors and hyperpriors are as before, while the mean response (3) is replaced by:

y(x, Θ) := w0 + Σ_{u=1}^U w_{x_u} + Σ_{k=1}^K Σ_{j=1}^J Π_{u∈Z_j} v_{x_u k}.   (4)
Note that this model specification is easy to combine with continuous variables, allowing MiFM to
handle data with different variable types.
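The categorical variant (4) only changes the indexing: interactions live on categories while coefficients live on attributes. A hedged sketch of our own, assuming x stores, for each category u, the global index of its active attribute:

import numpy as np

def mifm_mean_categorical(x, w0, w, V, Z):
    # Eq. (4): w0 + sum_u w_{x_u} + sum_k sum_j prod_{u in Z_j} v_{x_u, k}.
    # x: (U,) active attribute index per category; w: (D,) attribute weights;
    # V: (D, K) attribute factors; Z: (U, J) incidence matrix over categories.
    y = w0 + w[x].sum()
    for j in range(Z.shape[1]):
        cats = np.flatnonzero(Z[:, j])      # categories in hyperedge j
        if cats.size > 0:
            y += np.prod(V[x[cats], :], axis=0).sum()
    return y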
3.4 Posterior Consistency of the MiFM
In this section we establish posterior consistency of the MiFM model, namely: the posterior distribution Π of the conditional distribution P(Y|X), given the training N data pairs, contracts in a weak sense toward the truth as the sample size N increases.
Suppose that the data pairs (x_n, y_n), n = 1, ..., N, ∈ R^D × R are i.i.d. samples from the joint distribution P*(X, Y), according to which the marginal distribution for X and the conditional distribution of Y given X admit density functions f*(x) and f*(y|x), respectively, with respect to the Lebesgue measure. In particular, f*(y|x) is defined by

Y = y_n | X = x_n, Θ* ~ N(y(x_n, Θ*), σ), where Θ* = {β*_1, ..., β*_J, Z*_1, ..., Z*_J},

y(x, Θ*) := Σ_{j=1}^J β*_j Π_{i∈Z*_j} x_i, and x_n ∈ R^D, y_n ∈ R, β*_j ∈ R, Z*_j ⊂ {1, ..., D},   (5)
for n = 1, ..., N and j = 1, ..., J. In the above, Θ* represents the true parameter for the conditional density f*(y|x) that generates data sample y_n given x_n, for n = 1, ..., N. A key step in establishing posterior consistency for the MiFM (here we omit linear terms since, as mentioned earlier, they can be absorbed into the interaction structure) is to show that our PARAFAC-type structure can approximate arbitrarily well the true coefficients β*_1, ..., β*_J for the model given by (1).
Lemma 1. Given a natural number J ≥ 1, β_j ∈ R \ {0} and Z_j ⊂ {1, ..., D} for j = 1, ..., J, there exists K0 < J such that for all K ≥ K0 the system of polynomial equations β_j = Σ_{k=1}^K Π_{i∈Z_j} v_ik, j = 1, ..., J, has at least one solution in terms of v_11, ..., v_DK.
The upper bound K0 = J - 1 is only required when all interactions have depth D - 1. This is typically not the case in practice; therefore, smaller values of K are often sufficient.
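The intuition behind Lemma 1 can be checked numerically. The toy construction below is our own illustration (it uses K = J for simplicity, whereas the lemma guarantees some smaller K0 suffices): each interaction gets its own factor column, and the zero entries outside a hyperedge kill all cross terms whenever no hyperedge contains another.

import numpy as np

Z = [np.array([0, 1]), np.array([1, 2, 3]), np.array([0, 3, 4, 5])]
beta = np.array([2.0, -1.5, 0.7])           # target coefficients beta_j
D, K = 6, len(Z)

V = np.zeros((D, K))
for j, Zj in enumerate(Z):
    root = abs(beta[j]) ** (1.0 / len(Zj))  # spread magnitude over members
    V[Zj, j] = root
    V[Zj[0], j] *= np.sign(beta[j])         # carry the sign on one member

recovered = np.array([np.prod(V[Zj, :], axis=0).sum() for Zj in Z])
print(np.allclose(recovered, beta))         # True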
By conditioning on the training data pairs (x_n, y_n) to account for the likelihood induced by the PARAFAC representation, the statistician obtains the posterior distribution on the parameters of interest, namely θ := (Z, V), which in turn induces the posterior distribution on the conditional density, denoted by f(y|x), according to the MiFM model (3) without linear terms. The main result of this section is to show that under some conditions this posterior distribution Π will place most of its mass on the true conditional density f*(y|x) as N → ∞. To state the theorem precisely, we need to adopt a suitable notion of weak topology on the space of conditional densities, namely the set of f(y|x), which is induced by the weak topology on the space of joint densities on X, Y, that is, the set of f(x, y) = f*(x)f(y|x), where f*(x) is the true (but unknown) marginal density on X (see Ghosal et al. (1999), Sec. 2 for a formal definition).
Theorem 1. Given any true conditional density f*(y|x) given by (5), and assuming that the support of f*(x) is bounded, there is a constant K0 < J such that by setting K ≥ K0, the following statement holds: for any weak neighborhood U of f*(y|x), under the MiFM model, the posterior probability Π(U | (X_n, Y_n), n = 1, ..., N) → 1 with P*-probability one, as N → ∞.
A sketch of the proof of this theorem is given in the Supplement.
4 Prior constructions for interactions: FFM revisited and extended
The adoption of the FFM prior on the hypergraph of interactions leads to behavior distinct from the typical Latent Feature modeling setting. In a standard Latent Feature modeling setting (Griffiths & Ghahramani, 2011), each row of Z describes one of the data points in terms of its feature representation; controlling row sums is desirable to induce sparsity of the features. By contrast, for us a column of Z is identified with an interaction; its sum represents the interaction depth, which we want to control a priori.
Interaction selection using MCMC sampler One interesting issue of practical consequence arises
in the aggregation of the MCMC samples (details of the sampler are in the Supplement). When
aggregating MCMC samples in the context of latent feature modeling one would always obtain exactly
J latent features. However, in interaction modeling, different samples might have no interactions in
common (i.e. no exactly matching columns), meaning that the support of the resulting posterior estimate can have up to min{2^D - 1, IJ} unique interactions, where I is the number of MCMC samples.
In practice, we can obtain marginal distributions of all interactions across MCMC samples and use
those marginals for selection. One approach is to pick J interactions with highest marginals and
another is to consider interactions with marginal above some threshold (e.g. 0.5). We will resort to
the second approach in our experiments in Section 5 as it seems to be in more agreement with the
concept of "selection". Lastly, we note that while a data instance may a priori possess unbounded
number of features, the number of possible interactions in the data is bounded by 2D ? 1, therefore
taking J ? ? might not be appropriate. In any case, we do not want to encourage the number
of interactions to be too high for regression modeling, which would lead to overfitting. The above
considerations led us to opt for a parametric prior such as the FFM for interactions structure Z, as
opposed to going fully nonparametric. J can then be chosen using model selection procedures (e.g.
cross validation), or simply taken as the model input parameter.
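A sketch of the aggregation step just described (our own helper, with hypothetical names): compute each distinct interaction's marginal frequency across MCMC samples and keep those above a threshold.

import numpy as np

def select_interactions(samples, threshold=0.5):
    # samples: list of D x J binary matrices, one per MCMC iteration.
    # Returns {interaction (frozenset of variable indices): marginal prob.}
    # for interactions whose posterior marginal exceeds the threshold.
    counts = {}
    for Zs in samples:
        seen = set()                # count each interaction once per sample
        for j in range(Zs.shape[1]):
            members = frozenset(np.flatnonzero(Zs[:, j]).tolist())
            if members and members not in seen:
                seen.add(members)
                counts[members] = counts.get(members, 0) + 1
    n = len(samples)
    return {m: c / n for m, c in counts.items() if c / n > threshold}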
Generalized construction and induced distribution of interaction depths We now proceed to
introduce a richer family of prior distributions on hypergraphs of which the FFM is one instance.
Our construction is motivated by the induced distribution on the column sums and the conditional
probability updates that arise in the original FFM. Recall that under the FFM prior, interactions
are a priori independent. Fix an interaction j; for the remainder of this section let Zi denote
the indicator of whether variable i is present in interaction j or not (subscript j is dropped from
Zij to simplify notation). Let Mi = Z1 + . . . + Zi denote the number of variables among the
first i present in the corresponding interaction. By Beta-Bernoulli conjugacy, one obtains P(Z_i = 1 | Z_1, ..., Z_{i-1}) = (M_{i-1} + γ1) / (i - 1 + γ1 + γ2). This highlights the "rich-gets-richer" effect of the FFM
prior, which encourages the existence of very deep interactions while most other interactions have
very small depths. In some situations we may prefer a relatively larger number of interactions of
depths in the medium range.
An intuitive but somewhat naive alternative sampling process is to allow a variable to be included into an interaction according to its present "shallowness", quantified by (i - 1 - M_{i-1}) (instead of M_{i-1} in the FFM). It can be verified that this construction will lead to a distribution of interactions which concentrates most of its mass around D/2; moreover, exchangeability among the Z_i would be lost. To maintain exchangeability, we define the sampling process for the sequence Z = (Z_1, ..., Z_D) ∈ {0,1}^D as follows: let σ(·) be a random uniform permutation of {1, ..., D} and let σ_1 = σ^{-1}(1), ..., σ_D = σ^{-1}(D). Note that σ_1, ..., σ_D are discrete random variables and P(σ_k = i) = 1/D for any i, k = 1, ..., D. For i = 1, ..., D, set

P(Z_{σ_i} = 1 | Z_{σ_1}, ..., Z_{σ_{i-1}}) = (α M_{i-1} + (1 - α)(i - 1 - M_{i-1}) + γ1) / (i - 1 + γ1 + γ2),
P(Z_{σ_i} = 0 | Z_{σ_1}, ..., Z_{σ_{i-1}}) = ((1 - α) M_{i-1} + α (i - 1 - M_{i-1}) + γ2) / (i - 1 + γ1 + γ2),   (6)

where γ1 > 0, γ2 > 0, α ∈ [0, 1] are given parameters and M_i = Z_{σ_1} + ... + Z_{σ_i}. The collection of Z generated by this process shall be said to follow FFM_α. When α = 1 we recover the original FFM prior. When α = 0, we get the other extremal behavior mentioned at the beginning of the paragraph. Allowing α ∈ [0, 1] yields a richer spectrum spanning the two distinct extremal behaviors.
Details of the process and some of its properties are given in the Supplement. Here we briefly describe how FFM_α a priori ensures "poor gets richer" behavior and offers extra flexibility in modeling interaction depths compared to the original FFM. The depth of an interaction of D variables is described by the distribution of M_D. Consider the conditionals obtained for a Gibbs sampler where the index of the variable to be updated is random and based on P(σ_D = i | Z) (it is simply 1/D for FFM_1). Suppose we want to assess how likely it is to add a variable to an existing interaction via the expression Σ_{i: Z_i^(k) = 0} P(Z_i^(k+1) = 1, σ_D = i | Z^(k)), where k + 1 is the next iteration of the Gibbs sampler's conditional update. This probability is a function of M_D^(k); for small values of M_D^(k) it quantifies the tendency for the "poor gets richer" behavior. For FFM_1 it is given by ((D - M_D^(k)) / D) · (M_D^(k) + γ1) / (D - 1 + γ1 + γ2).
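The FFM_α process of Eq. (6) is straightforward to simulate; the sketch below (ours) draws one interaction column and can be used to reproduce the depth distributions in Fig. 1(b-f):

import numpy as np

def sample_ffm_alpha_column(D, gamma1, gamma2, alpha, seed=None):
    # One column of Z under FFM_alpha, Eq. (6): variables are visited in a
    # uniformly random order; alpha = 1 recovers the original FFM
    # ("rich gets richer"), alpha = 0 gives the "poor gets richer" extreme.
    rng = np.random.default_rng(seed)
    Z = np.zeros(D, dtype=np.int8)
    M = 0                           # number of included variables so far
    for i, idx in enumerate(rng.permutation(D), start=1):
        p1 = (alpha * M + (1 - alpha) * (i - 1 - M) + gamma1) \
             / (i - 1 + gamma1 + gamma2)
        Z[idx] = rng.random() < p1
        M += Z[idx]
    return Z

depths = [sample_ffm_alpha_column(30, 0.2, 1.0, alpha=0.7, seed=s).sum()
          for s in range(2000)]
print(np.mean(depths), np.var(depths))      # compare with Fig. 1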
In Fig. 1(a) we show that FFM_1's behavior is the opposite of "poor gets richer", while α ≤ 0.7 appears to ensure the desired property. Next, in Fig. 1(b-f) we show the distribution of M_D for various α, which exhibits a broader spectrum of behavior.

Figure 1: D = 30, γ1 = 0.2, γ2 = 1. (a) Probability of increasing interaction depth for α ∈ {0.0, 0.5, 0.7, 0.9, 1.0}; (b-f) FFM_α M_D distributions with different α (panel means/variances: 15.0/2.6, 13.5/7.4, 11.9/15.4, 8.3/38.7, 5.0/60.0).
5 Experimental Results

5.1 Simulation Studies
We compare MiFM methods against a variety of other regression techniques in the literature, including Bayesian Factorization Machines (FM), lasso-type regression, Support Vector Regression (SVR), and a multilayer perceptron neural network (MLP).2 The comparisons are done on the basis of prediction accuracy of responses (Root Mean Squared Error on the held-out data), quality of the regression coefficient estimates, and the interactions recovered.
5.1.1 Predictive Performance
In this set of experiments we demonstrate that MiFMs with either α = 0.7 or α = 1 have dominant predictive performance when high-order interactions are in play. In Fig. 2(a) we analyzed 70 random interactions of varying orders. We see that MiFM can handle arbitrary complexity of the interactions, while other methods are competitive only when the interaction structure is simple (i.e. linear or 2-way, on the right of Fig. 2(a)).
2 Random Forest Regression and optimization-based FM showed worse results than other methods.
Figure 2: RMSE for experiments: (a) interaction depths; (b) data with different ratios of continuous to categorical variables; (c) quality of the MiFM_1 and MiFM_0.7 coefficients; (d) MiFM_α exact recovery of the interactions with different α and data scenarios.
Next, to assess the effectiveness of MiFM in handling categorical variables (cf. Section 3.3) we vary
the number of continuous variables from 1 (and 29 attributes across categories) to 30 (no categorical
variables). Results in Fig. 2(b) demonstrate that our models can handle both variable types in the data
(including continuous-categorical interactions), and still exhibit competitive RMSE performance.
5.1.2 Interactions Quality
Coefficients of the interactions This experiment verifies the posterior consistency result of Theorem 1 and validates our factorization model for coefficient approximation. In Fig. 2(c) we compare MiFMs against OLS fitted with the corresponding sets of chosen interactions. Additionally, we benchmark against the Elastic Net (Zou & Hastie, 2005) based on the expanded data matrix with interactions of all depths included, that is, 2^D - 1 columns, and a corresponding OLS with only the selected interactions.
Selection of the interactions In this experiment we assess how well MiFM can recover true interactions. We consider three interaction structures: a realistic one with five linear, five 2-way, three 3-way, and one each of 4-, ..., 8-way interactions, and two artificial ones with 15 interactions that are either only 4-way or only 6-way, to challenge our model. Both binary and continuous variables are explored. Fig. 2(d) shows that MiFM can exactly recover up to 83% of the interactions, and with α = 0.8 it recovers 75% of the interactions in 4 out of 6 scenarios. The situation with 6-way interactions is more challenging: 36% are recovered for binary data and almost half for continuous. It is interesting to note that lower values of α handle binary data better, while higher values are more appropriate for continuous data, which is especially noticeable in the "only 6-way" case. We think it might be related to the fact that high-order interactions between binary variables are very rare in the data (i.e. a product of 6 binary variables is equal to 0 most of the time) and we need a prior eager to explore (α = 0) to find them.
5.2 Real-world applications

5.2.1 Finding epistasis
Identifying epistasis (i.e. interactions between genes) is one of the major questions in the field of human genetics. Interactions between multiple genes and environmental factors can often tell a lot more about the presence of a certain disease than any of the genes individually (Templeton, 2000). Our analysis of epistasis is based on the data from Himmelstein et al. (2011). These authors show that interactions between single nucleotide polymorphisms (SNPs) are often powerful predictors of various diseases, while individually SNPs might not contain important information at all. They developed a model-free approach to simulate data mimicking relationships between complex gene interactions and the presence of a disease. We used datasets with five SNPs and either 3-, 4-, and 5-way interactions or only 5-way interactions. For this experiment we compared MiFM1 and MiFM0; refitted logistic regression for each of our models based on the selected interactions (LMiFM1 and LMiFM0); a Multilayer Perceptron with 3 layers; and Random Forest.3 Results in Table 1 demonstrate that MiFM produces competitive performance compared to the very best black-box techniques on this data set, while it also selects interacting genes (i.e. finds epistasis). We don't know which of the 3- and 4-way interactions are present in the data, but since there is only one possible 5-way interaction we can check whether it was identified or not: both MiFM1 and MiFM0 had a 5-way interaction in at least 95% of the posterior samples.
3 FM, SVM and logistic regression had low accuracy of around 50% and are not reported.
Table 1: Prediction Accuracy on the Held-out Samples for the Gene Data

                 MiFM1   MiFM0   LMiFM1   LMiFM0   MLP     RF
3-, 4-, 5-way    0.775   0.771   0.860    0.870    0.887   0.883
only 5-way       0.649   0.645   0.623    0.625    0.628   0.628

Figure 3: MiFM1 store - month - year interaction: (a) store in Merignac; (b) store in Perols; MiFM0 city - store - day of week - week of year interaction: (c) store in Merignac; (d) store in Perols.
5.2.2 Understanding retail demand
We finally report the analysis of data obtained from a major retailer with stores in multiple locations all over the world. This dataset has 430k observations and 26 variables, spanning over 1100 binary variables after the one-hot encoding. Sales of a variety of products on different days and in different stores are provided as the response. We compare MiFM1 and MiFM0, both fitted with K = 12 and J = 150, versus Factorization Machines in terms of the adjusted mean absolute percent error AMAPE = 100 · Σ_n |ŷ_n - y_n| / Σ_n y_n, a common metric for evaluating sales forecasts. FM is currently the method of choice by the company for this data set, partly because the data is sparse and is similar in nature to recommender systems data. AMAPE for MiFM1 is 92.4; for MiFM0, 92.45; for FM, 92.0.
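For reference, the metric is a one-liner; a minimal sketch (ours):

import numpy as np

def amape(y_true, y_pred):
    # Adjusted MAPE: 100 * sum_n |yhat_n - y_n| / sum_n y_n.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.abs(y_pred - y_true).sum() / y_true.sum()

print(amape([10, 0, 5], [8, 1, 5]))         # 20.0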
Posterior analysis of predictor interactions The unique strength of MiFM is its ability to provide valuable insights about the data through posterior analysis. MiFM1 recovered 62 non-linear interactions, among which there are five 3-way and three 4-way. MiFM0 selected 63 non-linear interactions, including nine 3-way and four 4-way. We note that the choice α = 0 was made to explore deeper interactions, and indeed MiFM0 has more deep interactions than MiFM1. Coefficients for a 3-way interaction of MiFM1 for two stores in France across years and months are shown in Fig. 3(a,b). We observe different behavior, which would not be captured by a low-order interaction. In Fig. 3(c,d) we plot coefficients of a 4-way MiFM0 interaction for the same two stores in France. It is interesting to note the negative correlation between Saturday and Sunday coefficients for the store in Merignac, while the store in Perols is not affected by this interaction; this is an example of how MiFM can select interactions between attributes across categories.
6 Discussion
We have proposed a novel regression method which is capable of learning interactions of arbitrary orders among the regression predictors. Our model extends the Finite Feature Model and utilizes the extension to specify a hypergraph of interactions, while adopting a factorization mechanism for representing the corresponding coefficients. We found that MiFM performs very well when there are important interactions among a relatively high number (more than two) of predictor variables. This is the situation where existing modeling techniques may be ill-equipped to describe and recover the structure. There are several future directions that we would like to pursue. A thorough understanding of the fully nonparametric version of the FFM_α is of interest, that is, when the number of columns is taken to infinity. Such understanding may lead to an extension of the IBP and new modeling approaches in various domains.
Acknowledgments
This research is supported in part by grants NSF CAREER DMS-1351362, NSF CNS-1409303, a
research gift from Adobe Research and a Margaret and Herman Sokol Faculty Award.
References
Ai, Chunrong and Norton, Edward C. Interaction terms in logit and probit models. Economics Letters, 80(1):123-129, 2003.
Brambor, Thomas, Clark, William Roberts, and Golder, Matt. Understanding interaction models: Improving empirical analyses. Political Analysis, 14(1):63-82, 2006.
Cheng, Chen, Xia, Fen, Zhang, Tong, King, Irwin, and Lyu, Michael R. Gradient boosting factorization machines. In Proceedings of the 8th ACM Conference on Recommender Systems, pp. 265-272. ACM, 2014.
Cordell, Heather J. Detecting gene-gene interactions that underlie human diseases. Nature Reviews Genetics, 10(6):392-404, 2009.
Cristianini, Nello and Shawe-Taylor, John. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
Fan, Jianqing and Lv, Jinchi. A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1):101, 2010.
Freudenthaler, Christoph, Schmidt-Thieme, Lars, and Rendle, Steffen. Bayesian factorization machines. 2011.
Ghahramani, Zoubin and Griffiths, Thomas L. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, pp. 475-482, 2005.
Ghosal, Subhashis, Ghosh, Jayanta K, Ramamoorthi, RV, et al. Posterior consistency of Dirichlet mixtures in density estimation. The Annals of Statistics, 27(1):143-158, 1999.
Griffiths, Thomas L and Ghahramani, Zoubin. The Indian buffet process: An introduction and review. The Journal of Machine Learning Research, 12:1185-1224, 2011.
Harshman, Richard A. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. 1970.
Himmelstein, Daniel S, Greene, Casey S, and Moore, Jason H. Evolving hard problems: generating human genetics datasets with a complex etiology. BioData Mining, 4(1):1, 2011.
Nguyen, Trung V, Karatzoglou, Alexandros, and Baltrunas, Linas. Gaussian process factorization machines for context-aware recommendations. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 63-72. ACM, 2014.
Rendle, Steffen. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pp. 995-1000. IEEE, 2010.
Rendle, Steffen, Gantner, Zeno, Freudenthaler, Christoph, and Schmidt-Thieme, Lars. Fast context-aware recommendations with factorization machines. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 635-644. ACM, 2011.
Templeton, Alan R. Epistasis and complex traits. Epistasis and the Evolutionary Process, pp. 41-57, 2000.
Tibshirani, Robert. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pp. 267-288, 1996.
Zhu, Ji, Rosset, Saharon, Hastie, Trevor, and Tibshirani, Rob. 1-norm support vector machines. Advances in Neural Information Processing Systems, 16(1):49-56, 2004.
Zou, Hui and Hastie, Trevor. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
Predicting Organic Reaction Outcomes with
Weisfeiler-Lehman Network
Wengong Jin†, Connor W. Coley‡, Regina Barzilay†, Tommi Jaakkola†
† Computer Science and Artificial Intelligence Lab, MIT
‡ Department of Chemical Engineering, MIT
† {wengong,regina,tommi}@csail.mit.edu, ‡ ccoley@mit.edu
Abstract
The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring
the space of possible transformations is intractable. The current solution utilizes
reaction templates to limit the space, but it suffers from coverage and efficiency
issues. In this paper, we propose a template-free approach to efficiently explore the
space of product molecules by first pinpointing the reaction center, the set of nodes
and edges where graph edits occur. Since only a small number of atoms contribute
to reaction center, we can directly enumerate candidate products. The generated
candidates are scored by a Weisfeiler-Lehman Difference Network that models
high-order interactions between changes occurring at nodes across the molecule.
Our framework outperforms the top-performing template-based approach with a
10% margin, while running orders of magnitude faster. Finally, we demonstrate
that the model accuracy rivals the performance of domain experts.
1 Introduction
One of the fundamental problems in organic chemistry is the prediction of which products form as
a result of a chemical reaction [16, 17]. While the products can be determined unambiguously for
simple reactions, it is a major challenge for many complex organic reactions. Indeed, experimentation
remains the primary manner in which reaction outcomes are analyzed. This is time consuming,
expensive, and requires the help of an experienced chemist. The empirical approach is particularly
limiting for the goal of automatically designing efficient reaction sequences that produce specific
target molecule(s), a problem known as chemical retrosynthesis [16, 17].
Viewing molecules as labeled graphs over atoms, we propose to formulate the reaction prediction
task as a graph transformation problem. A chemical reaction transforms input molecules (reactants)
into new molecules (products) by performing a set of graph edits over reactant molecules, adding
new edges and/or eliminating existing ones. Given that a typical reaction may involve more than 100
atoms, fully exploring all possible transformations is intractable. The computational challenge is
how to reduce the space of possible edits effectively, and how to select the product from among the
resulting candidates.
The state-of-the-art solution is based on reaction templates (Figure 1). A reaction template specifies a
molecular subgraph pattern to which it can be applied and the corresponding graph transformation.
Since multiple templates can match a set of reactants, another model is trained to filter candidate
products using standard supervised approaches. The key drawbacks of this approach are coverage
and scalability. A large number of templates is required to ensure that at least one can reconstitute the
correct product. The templates are currently either hand-crafted by experts [7, 1, 15] or generated
from reaction databases with heuristic algorithms [2, 11, 3]. For example, Coley et al. [3] extracts
140K unique reaction templates from a database of 1 million reactions. Beyond coverage, applying a
Figure 1: An example reaction where the reaction center is (27,28), (7,27), and (8,27), highlighted in
green. Here bond (27,28) is deleted and (7,27) and (8,27) are connected by aromatic bonds to form a
new ring. The corresponding reaction template consists of not only the reaction center, but nearby
functional groups that explicitly specify the context.
template involves graph matching and this makes examining large numbers of templates prohibitively
expensive. The current approach is therefore limited to small datasets with limited types of reactions.
In this paper, we propose a template-free approach by learning to identify the reaction center, a small
set of atoms/bonds that change from reactants to products. In our datasets, on average only 5.5%
of the reactant molecules directly participate in the reaction. The small size of the reaction centers
together with additional constraints on bond formations enables us to directly enumerate candidate
products. Our forward-prediction approach is then divided into two key parts: (1) learning to identify
reaction centers and (2) learning to rank the resulting enumerated candidate products.
Our technical approach builds on neural embedding of the Weisfeiler-Lehman isomorphism test.
We incorporate a specific attention mechanism to identify reaction centers while leveraging distal
chemical effects not accounted for in related convolutional representations [5, 4]. Moreover, we
propose a novel Weisfeiler-Lehman Difference Network to learn to represent and efficiently rank
candidate transformations between reactants and products.
We evaluate our method on two datasets derived from the USPTO [13], and compare our methods
to the current top-performing system [3]. Our method achieves 83.9% and 77.9% accuracy on two
datasets, outperforming the baseline approach by 10%, while running 140 times faster. Finally, we
demonstrate that the model outperforms domain experts by a large margin.
2 Related Work
Template-based Approach Existing machine learning models for product prediction are mostly
built on reaction templates. These approaches differ in the way templates are specified and in the
way the final product is selected from multiple candidates. For instance, Wei et al. [18] learns to
select among 16 pre-specified, hand-encoded templates, given fingerprints of reactants and reagents.
While this work was developed on a narrow range of chemical reaction types, it is among the first
implementations that demonstrates the potential of neural models for analyzing chemical reactions.
More recent work has demonstrated the power of neural methods on a broader set of reactions. For
instance, Segler and Waller [14] and Coley et al. [3] use a data-driven approach to obtain a large set of
templates, and then employ a neural model to rank the candidates. The key difference between these
approaches is the representation of the reaction. In Segler and Waller [14], molecules are represented
based on their Morgan fingerprints, while Coley et al. [3] represents reactions by the features of
atoms and bonds in the reaction center. However, the template-based architecture limits both of these
methods in scaling up to larger datasets with more diversity.
Template-free Approach Kayala et al. [8] also presented a template-free approach to predict reaction outcomes. Our approach differs from theirs in several ways. First, Kayala et al. operates at the mechanistic level, identifying elementary mechanistic steps rather than the overall transformations from reactants to products. Since most reactions consist of many mechanistic steps, their approach
Figure 2: Overview of our approach. (1) we train a model to identify pairwise atom interactions
in the reaction center. (2) we pick the top K atom pairs and enumerate chemically-feasible bond
configurations between these atoms. Each bond configuration generates a candidate outcome of the
reaction. (3) Another model is trained to score these candidates to find the true product.
requires multiple predictions to fulfill an entire reaction. Our approach operates at the graph level, predicting transformations from reactants to products in a single step. Second, mechanistic descriptions of reactions are not given in existing reaction databases. Therefore, Kayala et al. created their
training set based on a mechanistic-level template-driven expert system. In contrast, our model is
learned directly from real-world experimental data. Third, Kayala et al. uses feed-forward neural networks where atoms and graphs are represented by molecular fingerprints and additional hand-crafted
features. Our approach builds from graph neural networks to encode graph structures.
Molecular Graph Neural Networks The question of molecular graph representation is a key issue
in reaction modeling. In computational chemistry, molecules are often represented with Morgan
Fingerprints, boolean vectors that reflect the presence of various substructures in a given molecule.
Duvenaud et al. [5] developed a neural version of Morgan Fingerprints, where each convolution
operation aggregates features of neighboring nodes as a replacement of the fixed hashing function.
This representation was further expanded by Kearnes et al. [9] into graph convolution models. Dai
et al. [4] consider a different architecture where a molecular graph is viewed as a latent variable
graphical model. Their recurrent model is derived from Belief Propagation-like algorithms. Gilmer
et al. [6] generalized all previous architectures into message-passing network, and applied them to
quantum chemistry. The closest to our work is the Weisfeiler-Lehman Kernel Network proposed
by Lei et al. [12]. This recurrent model is derived from the Weisfeiler-Lehman kernel that produces
isomorphism-invariant representations of molecular graphs. In this paper, we further enhance this
representation to capture graph transformations for reaction prediction.
3 Overview
Our approach bypasses reaction templates by learning a reaction center identifier. Specifically, we
train a neural network that operates on the reactant graph to predict a reactivity score for every
pair of atoms (Section 3.1). A reaction center is then selected by picking a small number of atom
pairs with the highest reactivity scores. After identifying the reaction center, we generate possible
product candidates by enumerating possible bond configurations between atoms in the reaction center
(Section 3.2) subject to chemical constraints. We train another neural network to rank these product
candidates (represented as graphs, together with the reactants) so that the correct reaction outcome is
ranked highest (Section 3.3). The overall pipeline is summarized in Figure 2. Before describing the
two modules in detail, we formally define some key concepts used throughout the paper.
Chemical Reaction A chemical reaction is a pair of molecular graphs (Gr , Gp ), where Gr is
called the reactants and Gp the products. A molecular graph is described as G = (V, E), where
V = {a1 , a2 , ? ? ? , an } is the set of atoms and E = {b1 , b2 , ? ? ? , bm } is the set of associated bonds of
varying types (single, double, aromatic, etc.). Note that Gr is has multiple connected components
3
since there are multiple molecules comprising the reactants. The reactions used for training are
atom-mapped so that each atom in the product graph has a unique corresponding atom in the reactants.
Reaction Center A reaction center is a set of atom pairs {(ai , aj )}, where the bond type between ai
and aj differs from Gr to Gp . In other words, a reaction center is a minimal set of graph edits needed
to transform reactants to products. Since the reported reactions in the training set are atom-mapped,
reaction centers can be identified automatically given the product.
3.1 Reaction Center Identification
In a given reaction R = (Gr, Gp), each atom pair (au, av) in Gr is associated with a reactivity label y_uv ∈ {0, 1} specifying whether their relation differs between reactants and products. The label is determined by comparing Gr and Gp with the help of atom-mapping. We predict the label on the basis of learned atom representations that incorporate contextual cues from the surrounding chemical environment. In particular, we build on a Weisfeiler-Lehman Network (WLN) that has shown superior results against other learned graph representations in the narrower setting of predicting chemical properties of individual molecules [12].
3.1.1 Weisfeiler-Lehman Network (WLN)
The WLN is inspired by the Weisfeiler-Lehman isomorphism test for labeled graphs. The architecture is designed to embed the computations inherent in WL isomorphism testing to generate learned isomorphism-invariant representations for atoms.

WL Isomorphism Test The key idea of the isomorphism test is to repeatedly augment node labels by the sorted set of node labels of neighbor nodes and to compress these augmented labels into new, short labels. The initial labeling is the atom element. In each iteration, its label is augmented with the element labels of its neighbors. Such a multi-set label is compactly represented as a new label by a hash function. Let c_v^(L) be the final label of atom a_v. The molecular graph G = (V, E) is represented as a set {(c_u^(L), b_uv, c_v^(L)) | (u, v) ∈ E}, where b_uv is the bond type between u and v. Two graphs are said to be isomorphic if their set representations are the same. The number of distinct labels grows exponentially with the number of iterations L.
WL Network The discrete relabeling process does not directly generalize to continuous feature vectors. Instead, we appeal to neural networks to continuously embed the computations inherent in the WL test. Let r be the analogous continuous relabeling function. Then a node v ∈ G with neighbor nodes N(v), node features f_v, and edge features f_uv is "relabeled" according to

r(v) = τ(U1 f_v + U2 Σ_{u∈N(v)} τ(V[f_u, f_uv]))   (1)

where τ(·) could be any non-linear function. We apply this relabeling operation iteratively to obtain context-dependent atom vectors

h_v^(l) = τ(U1 h_v^(l-1) + U2 Σ_{u∈N(v)} τ(V[h_u^(l-1), f_uv]))   (1 ≤ l ≤ L)   (2)

where h_v^(0) = f_v and U1, U2, V are shared across layers. The final atom representations arise from mimicking the set comparison function in the WL isomorphism test, yielding

c_v = Σ_{u∈N(v)} W^(0) h_u^(L) ⊙ W^(1) f_uv ⊙ W^(2) h_v^(L)   (3)

The set comparison here is realized by matching each rank-1 edge tensor h_u^(L) ⊗ f_uv ⊗ h_v^(L) to a set of reference edges also cast as rank-1 tensors W^(0)[k] ⊗ W^(1)[k] ⊗ W^(2)[k], where W[k] is the k-th row of matrix W. In other words, Eq. 3 above could be written as

c_v[k] = Σ_{u∈N(v)} ⟨ W^(0)[k] ⊗ W^(1)[k] ⊗ W^(2)[k],  h_u^(L) ⊗ f_uv ⊗ h_v^(L) ⟩   (4)

The resulting c_v is a vector representation that captures the local chemical environment of the atom (through relabeling) and involves a comparison against a learned set of reference environments. The representation of the whole graph G is simply the sum over all the atom representations: c_G = Σ_v c_v.
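To make the update equations concrete, here is a minimal NumPy sketch of the WLN forward pass (Eqs. 2-3). It is our own illustration: it assumes ReLU for the nonlinearity τ and dense parameter matrices, and is written for clarity rather than efficiency.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def wln_atom_reps(F, E, adj, U1, U2, V, W0, W1, W2, L=3):
    # F: (n, d) initial atom features; E: dict (u, v) -> edge feature (e,);
    # adj: list of neighbor lists; U1, U2: (d, d); V: (d, d + e);
    # W0, W2: (h, d); W1: (h, e). Returns (n, h) atom representations c_v.
    n, d = F.shape
    H = F.copy()
    for _ in range(L):                      # iterative relabeling, Eq. (2)
        Hnew = np.zeros_like(H)
        for v, nbrs in enumerate(adj):
            msg = sum((relu(V @ np.concatenate([H[u], E[(u, v)]]))
                       for u in nbrs), np.zeros(d))
            Hnew[v] = relu(U1 @ H[v] + U2 @ msg)
        H = Hnew
    c = np.zeros((n, W0.shape[0]))
    for v, nbrs in enumerate(adj):          # set comparison, Eq. (3)
        for u in nbrs:
            c[v] += (W0 @ H[u]) * (W1 @ E[(u, v)]) * (W2 @ H[v])
    return c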
3.1.2 Finding Reaction Centers with WLN
We present two models to predict reactivity: the local and global models. Our local model is based
directly on the atom representations cu and cv in predicting label yuv . The global model, on the other
hand, selectively incorporates distal chemical effects with the goal of capturing the fact that atoms
outside of the reaction center may be necessary for the reaction to occur. For example, the reaction
center may be influenced by certain reagents1 . We incorporate these distal effects into the global
model through an attention mechanism.
Local Model Let c_u, c_v be the atom representations for atoms u and v, respectively, as returned by the WLN. We predict the reactivity score of (u, v) by passing these through another neural network:

s_uv = σ(u^T τ(M_a c_u + M_a c_v + M_b b_uv))   (5)

where σ(·) is the sigmoid function, and b_uv is an additional feature vector that encodes auxiliary information about the pair, such as whether the two atoms are in different molecules or which type of bond connects them.
Global Model Let α_uv be the attention score of atom v on atom u. The global context representation c̃_u of atom u is calculated as the weighted sum of all reactant atoms, where the weight comes from the attention module:

c̃_u = Σ_v α_uv c_v;   α_uv = σ(u^T τ(P_a c_u + P_a c_v + P_b b_uv))   (6)

s_uv = σ(u^T τ(M_a c̃_u + M_a c̃_v + M_b b_uv))   (7)
Note that the attention is obtained with sigmoid rather than softmax non-linearity since there may be
multiple atoms relevant to a particular atom u.
Training Both models are trained to minimize the following loss function:

L(T) = - Σ_{R∈T} Σ_{u≠v∈R} y_uv log(s_uv) + (1 - y_uv) log(1 - s_uv)   (8)

Here we predict each label independently because of the large number of variables. For a given reaction with N atoms, we need to predict the reactivity score of O(N^2) pairs. This quadratic complexity prohibits us from adding higher-order dependencies between different pairs. Nonetheless, we found independent prediction yields sufficiently good performance.
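A sketch of the local scoring model and the loss (ours; names are illustrative and τ is taken to be ReLU):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reactivity_scores(C, B, Ma, Mb, u):
    # Eq. (5): C: (n, h) atom vectors; B: (n, n, e) pair features b_uv;
    # Ma: (m, h); Mb: (m, e); u: (m,). Returns the (n, n) score matrix.
    n = C.shape[0]
    S = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                hidden = np.maximum(Ma @ C[a] + Ma @ C[b] + Mb @ B[a, b], 0.0)
                S[a, b] = sigmoid(u @ hidden)
    return S

def reactivity_loss(S, Y):
    # Independent binary cross-entropy over all ordered pairs, Eq. (8).
    mask = ~np.eye(S.shape[0], dtype=bool)
    s, y = np.clip(S[mask], 1e-9, 1 - 1e-9), Y[mask]
    return -(y * np.log(s) + (1 - y) * np.log(1 - s)).sum()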
3.2 Candidate Generation
We select the top K atom pairs with the highest predicted reactivity score and designate them,
collectively, as the reaction center. The set of candidate products are then obtained by enumerating all
possible bond configuration changes within the set. While the resulting set of candidate products is
exponential in K, many can be ruled out by invoking additional constraints. For example, every atom
has a maximum number of neighbors they can connect to (valence constraint). We also leverage
the statistical bias that reaction centers are very unlikely to consist of disconnected components
(connectivity constraint). Some multi-step reactions do exist that violate the connectivity constraint.
As we will show, the set of candidates arising from this procedure is more compact than those arising
from templates without sacrificing coverage.
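The enumeration itself is mechanical once the center is fixed. The sketch below is our own, with simplified chemistry (integer bond orders and a per-atom valence budget, whereas the real system must also handle aromaticity and the connectivity bias), and illustrates the idea:

from itertools import product

def enumerate_candidates(center_pairs, current_bonds, valence_slack,
                         bond_orders=(0, 1, 2, 3)):
    # center_pairs: top-K atom pairs (u, v); current_bonds: (u, v) -> order
    # in the reactants (0 = no bond); valence_slack: atom -> allowed increase
    # in total bond order. Yields {pair: new order} edits passing the check.
    for assignment in product(bond_orders, repeat=len(center_pairs)):
        if all(a == current_bonds[p]
               for a, p in zip(assignment, center_pairs)):
            continue                        # identical to reactants: no edit
        delta = {}
        for (u, v), new in zip(center_pairs, assignment):
            d = new - current_bonds[(u, v)]
            delta[u] = delta.get(u, 0) + d
            delta[v] = delta.get(v, 0) + d
        if all(delta[a] <= valence_slack[a] for a in delta):
            yield dict(zip(center_pairs, assignment))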
3.3 Candidate Ranking
The training set for candidate ranking consists of lists T = {(r, p0, p1, ..., pm)}, where r are the reactants, p0 is the known product, and p1, ..., pm are other enumerated candidate products. The goal is to learn a scoring function that ranks the known product p0 highest. The challenge in ranking
candidate products is again representational. We must learn to represent (r, p) in a manner that can
focus on the key difference between the reactants r and products p while also incorporating the
necessary chemical contexts surrounding the changes.
1 Molecules that do not typically contribute atoms to the product but are nevertheless necessary for the reaction to proceed.
We again propose two alternative models to score each candidate pair (r, p). The first model naively represents a reaction by summing difference vectors of all atom representations obtained from a WLN on the associated connected components. Our second and improved model, called WLDN, takes into account higher-order interactions between these difference vectors.

WLN with Sum-Pooling Let c_v^(p_i) be the learned atom representation of atom v in candidate product molecule p_i. We define the difference vector d_v^(p_i) pertaining to atom v as follows:

d_v^(p_i) = c_v^(p_i) - c_v^(r);   s(p_i) = u^T τ(M Σ_{v∈p_i} d_v^(p_i))   (9)
Recall that the reactants and products are atom-mapped so we can use v to refer to the same atom.
The pooling operation is a simple sum over these difference vectors, resulting in a single vector for
each (r, pi ) pair. This vector is then fed into another neural network to score the candidate product pi .
Weisfeiler-Lehman Difference Network (WLDN) Instead of simply summing all difference vectors, the WLDN operates on another graph called a difference graph. A difference graph D(r, p_i) is defined as a molecular graph which has the same atoms and bonds as p_i, with atom v's feature vector replaced by d_v^(p_i). Operating on the difference graph has several benefits. First, in D(r, p_i), atom v's feature vector deviates from zero only if it is close to the reaction center, thus focusing the processing on the reaction center and its immediate context. Second, D(r, p_i) explicates neighbor dependencies between difference vectors. The WLDN maps this graph-based representation into a fixed-length vector, by applying a separately parameterized WLN on top of D(r, p_i):
h_v^(p_i, l) = τ(U1 h_v^(p_i, l-1) + U2 Σ_{u∈N(v)} τ(V[h_u^(p_i, l-1), f_uv]))   (1 ≤ l ≤ L)   (10)

d_v^(p_i, L) = Σ_{u∈N(v)} W^(0) h_u^(p_i, L) ⊙ W^(1) f_uv ⊙ W^(2) h_v^(p_i, L)   (11)

where h_v^(p_i, 0) = d_v^(p_i). The final score of p_i is s(p_i) = u^T τ(M Σ_{v∈p_i} d_v^(p_i, L)).
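Putting Eqs. (9)-(11) together, candidate scoring reduces to running a second WLN on the difference vectors; a sketch (ours, reusing the wln_atom_reps helper from the WLN sketch in Section 3.1.1 and again assuming ReLU for τ):

import numpy as np

def wldn_score(C_r, C_p, E_p, adj_p, wln_params, M, u):
    # C_r, C_p: (n, h) atom vectors of reactants / candidate (atom-mapped);
    # E_p, adj_p: edge features and adjacency of the candidate product;
    # wln_params: (U1, U2, V, W0, W1, W2) of the separately trained WLN.
    D0 = C_p - C_r                          # difference vectors d_v, Eq. (9)
    D_L = wln_atom_reps(D0, E_p, adj_p, *wln_params)   # Eqs. (10)-(11)
    pooled = D_L.sum(axis=0)                # sum-pool over atoms
    return u @ np.maximum(M @ pooled, 0.0)  # s(p_i)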
Training Both models are trained to minimize the softmax log-likelihood objective over the scores {s(p0), s(p1), ..., s(pm)} where s(p0) corresponds to the target.
4 Experiments
Data As a source of data for our experiments, we used reactions from USPTO granted patents,
collected by Lowe [13]. After removing duplicates and erroneous reactions, we obtained a set of
480K reactions, to which we refer in the paper as USPTO. This dataset is divided into 400K, 40K,
and 40K for training, development, and testing purposes.
In addition, for comparison purposes we report results on the subset of 15K reactions from this dataset (referred to as USPTO-15K) used by Coley et al. [3]. They selected this subset to include
reactions covered by the 1.7K most common templates. We follow their split, with 10.5K, 1.5K, and
3K for training, development, and testing.
Setup for Reaction Center Identification The output of this component consists of K atom pairs
with the highest reactivity scores. We compute the coverage as the proportion of reactions where all
atom pairs in the true reaction center are predicted by the model, i.e., where the recorded product is
found in the model-generated candidate set.
The model features reflect basic chemical properties of atoms and bonds. Atom-level features include
its elemental identity, degree of connectivity, number of attached hydrogen atoms, implicit valence,
and aromaticity. Bond-level features include bond type (single, double, triple, or aromatic), whether
it is conjugated, and whether the bond is part of a ring.
Both our local and global models are built upon a Weisfeiler-Lehman Network, with unrolled depth 3. All models are optimized with Adam [10], with a learning rate decay factor of 0.9.
Setup for Candidate Ranking The goal of this evaluation is to determine whether the model can
select the correct product from a set of candidates derived from the reaction center. We compare model
Table 1: Results on USPTO-15K and USPTO datasets.

(a) Reaction Center Prediction Performance. Coverage is reported by picking the top K (K = 6, 8, 10) reactivity pairs. |θ| is the number of model parameters.

USPTO-15K
Method   |θ|      K=6    K=8    K=10
Local    572K     80.1   85.0   87.7
Local    1003K    81.6   86.1   89.1
Global   756K     86.7   90.1   92.2

USPTO
Local    572K     83.0   87.2   89.6
Local    1003K    82.4   86.7   89.1
Global   756K     89.8   92.0   93.3

Avg. Num. of Candidates (USPTO)
Template   482.3 out of 5006
Global     60.9 (K=6)   246.5 (K=8)   1076 (K=10)

(b) Candidate Ranking Performance. Precision at ranks 1, 3, 5 is reported. (*) denotes that the true product was added if not covered by the previous stage.

USPTO-15K
Method         Cov.    P@1    P@3    P@5
Coley et al.   100.0   72.1   86.6   90.7
WLN            90.1    74.9   84.6   86.3
WLDN           90.1    76.7   85.6   86.8
WLN (*)        100.0   81.4   92.5   94.8
WLDN (*)       100.0   84.1   94.1   96.1

USPTO
WLN            92.0    73.5   86.1   89.0
WLDN           92.0    74.0   86.7   89.5
WLN (*)        100.0   76.7   91.0   94.6
WLDN (*)       100.0   77.8   91.9   95.4
accuracy against the top-performing template-based approach by Coley et al. [3]. This approach
employs frequency-based heuristics to construct reaction templates and then uses a neural model
to rank the derived candidates. As explained above, due to the scalability issues associated with
this baseline, we can only compare on USPTO-15K, which the authors restricted to contain only
examples that were instantiated by their most popular templates.
For all experiments, we set K = 8 for candidate generation. This set-up achieves 90% and 92%
coverage on two datasets and yields 250 candidates per reaction. In addition, we compare variants
of our model on the full USPTO dataset. To compare a standard WLN representation against its
counterpart with Difference Networks (WLDN), we train them under the same setup, fixing the
number of parameters to 650K.
Finally, to factorize the coverage of candidate selection and the accuracy of candidate ranking, we
consider two evaluation scenarios: (1) the candidate list as derived from reaction center; (2) the above
candidate list augmented with the true product if not found. This latter setup is marked with (*).
4.1 Results
Reaction Center Identification Table 1a reports the coverage of the model as compared to the real
reaction core. Clearly, the coverage depends on the number of atom pairs K, with higher coverage
for larger values of K. These results demonstrate that even for K = 8, the model achieves high
coverage, above 90%.
The results also clearly demonstrate the advantage of the global model over the local one, which is
consistent across all experiments. The superiority of the global model is in line with the well-known
fact that reactivity depends on more than the immediate local environment surrounding the reaction
center. The presence of certain functional groups (structural motifs that appear frequently in organic
chemistry) far from the reaction center can promote or inhibit different modes of reactivity. Moreover,
reactivity is often influenced by the presence of reagents, which are separate molecules that may not
directly contribute atoms to the product. Consideration of both of these factors necessitates the use of
a model that can account for long-range dependencies between atoms.
Figure 3 depicts one such example, where the observed reactivity can be attributed to the presence of
a reagent molecule that is completely disconnected from the reaction center itself. While the local
model fails to anticipate this reactivity, the global one accurately predicts the reaction center. The
attention map highlights the reagent molecule as the determinant context.
Candidate Generation  Here we compare the coverage of the generated candidates with the
template-based model. Table 1a shows that for K = 6, our model generates an average of 60.1 candidates
and reaches a coverage of 89.8%. The template-based baseline requires 5006 templates extracted
from the training data (corresponding to a minimum of five precedent reactions) to achieve 90.1%
coverage with an average of 482 candidates per example.

Figure 3: A reaction that reduces the carbonyl carbon of an amide by removing bond 4-23 (red circle).
Reactivity at this site would be highly unlikely without the presence of borohydride (atom 25, blue
circle). The global model correctly predicts bond 4-23 as the most susceptible to change, while the
local model does not even include it in the top ten predictions. The attention map of the global model
shows that atoms 1, 25, and 26 were determinants of atom 4's predicted reactivity.

Figure 4: (a) An example where reaction occurs at the α carbon (atom 7, red circle) of a carbonyl
group (bond 8-13), also adjacent to a phenyl group (atoms 1-6). The corresponding template explicitly
requires both the carbonyl and part of the phenyl ring as context (atoms 4, 7, 8, 13), although
reactivity in this case does not require the additional specification of the phenyl group (atom 1).
(b) Performance of reactions with different popularity; MRR stands for mean reciprocal rank.
This weakness of the baseline model can be explained by the difficulty in defining general heuristics
with which to extract templates from reaction examples. It is possible to define different levels
of specificity based on the extent to which atoms surrounding the reaction center are included or
generalized [11]. This introduces an unavoidable trade-off between generality (fewer templates,
higher coverage, more candidates) and specificity (more templates, less coverage, fewer candidates).
Figure 4a illustrates one reaction example where the corresponding template is rare due to the
adjacency of the reaction center to both a carbonyl group and a phenyl ring. Because adjacency to
either group can influence reactivity, both are included as part of the template, although reactivity in
this case does not require the additional specification of the phenyl group.
The massive number of templates required for high coverage is a serious impediment for the baseline
approach because each template application requires solving a subgraph isomorphism problem.
Specifically, it takes on average 7 seconds to apply the 5006 templates to a test instance, while our
method takes less than 50 ms, about 140 times faster.
Candidate Ranking  Table 1b reports the performance on the product prediction task. Since the
baseline templates from [3] were optimized on the test set and have 100% coverage, we compare its
performance against our models to which the correct product is added (WLN(*) and WLDN(*)).
Our model clearly outperforms the baseline by a wide margin. Even when compared against the
candidates automatically computed from the reaction center, WLDN outperforms the baseline in
top-1 accuracy. The results also demonstrate that the WLDN model consistently outperforms the
WLN model. This is consistent with our intuition that modeling higher order dependencies between
the difference vectors is advantageous over simply summing over them. Table 1b also shows that
the model performance scales nicely when tested on the full USPTO dataset. Moreover, the relative
performance difference between WLN and WLDN is preserved on this dataset.
We further analyze model performance based on the frequency of the underlying transformation,
as reflected by the number of template precedents. In Figure 4b we group the test instances
according to their frequency and report the coverage of the global model and the mean reciprocal
rank (MRR) of the WLDN model on each of them. As expected, our approach achieves the highest
performance for frequent reactions. However, it maintains reasonable coverage and ranking accuracy
even for rare reactions, which are particularly challenging for template-based methods.
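For concreteness, the MRR computation is a one-liner; the sketch below is our own illustration (the zero-contribution convention for misses is an assumption we state explicitly):

import numpy as np

def mean_reciprocal_rank(ranks):
    # ranks: 1-based rank of the true product for each test instance,
    # or None when the true product is absent from the candidate list
    # (counted here as a zero contribution).
    return float(np.mean([0.0 if r is None else 1.0 / r for r in ranks]))

# Example: true products ranked 1st and 3rd, plus one miss:
# mean_reciprocal_rank([1, 3, None]) == (1 + 1/3 + 0) / 3 ≈ 0.444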
4.2 Human Evaluation Study
We randomly selected 80 reaction examples from the test set, ten from each of the template popularity
intervals of Figure 4b, and asked ten chemists to predict the outcome of each given its reactants. The
average accuracy across the ten performers was 48.2%. Our model achieves an accuracy of 69.1%,
outperforming even the best individual performer who scored 66.3%.
            Performer 1  2     3     4     5     6     7     8     9     10
Chemist     56.3   50.0  40.0  63.8  66.3  65.0  40.0  58.8  25.0  16.3
Our Model   69.1

Table 2: Human and model performance on 80 reactions randomly selected from the USPTO test set
to cover a diverse range of reaction types. The first 8 are chemists with rich experience in organic
chemistry (graduate and postdoctoral chemists) and the last two are graduate students in chemical
engineering who use organic chemistry concepts regularly but have less formal training.
5 Conclusion
We proposed a novel template-free approach for chemical reaction prediction. Instead of generating
candidate products by reaction templates, we first predict a small set of atoms/bonds in the reaction
center, and then produce candidate products by enumerating all possible bond configuration changes
within the set. Compared to the template-based approach, our framework runs 140 times faster,
allowing us to scale to much larger reaction databases. Both our reaction center identifier and
candidate ranking model are built on the Weisfeiler-Lehman Network and its variants, which learn
compact representations of graphs and reactions. We hope our work will encourage both computer
scientists and chemists to explore fully data-driven approaches for this task.
Acknowledgement
We thank Tim Jamison, Darsh Shah, Karthik Narasimhan and the reviewers for their helpful comments. We also thank members of the MIT Department of Chemistry and Department of Chemical
Engineering who participated in the human benchmarking study. This work was supported by the
DARPA Make-It program under contract ARO W911NF-16-2-0023.
References
[1] Jonathan H Chen and Pierre Baldi. No electron left behind: a rule-based expert system to predict chemical reactions and reaction mechanisms. Journal of Chemical Information and Modeling, 49(9):2034–2043, 2009.
[2] Clara D Christ, Matthias Zentgraf, and Jan M Kriegl. Mining electronic laboratory notebooks: analysis, retrosynthesis, and reaction based enumeration. Journal of Chemical Information and Modeling, 52(7):1745–1756, 2012.
[3] Connor W Coley, Regina Barzilay, Tommi S Jaakkola, William H Green, and Klavs F Jensen. Prediction of organic reaction outcomes using machine learning. ACS Central Science, 2017.
[4] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. arXiv preprint arXiv:1603.05629, 2016.
[5] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pages 2224–2232, 2015.
[6] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
[7] Markus Hartenfeller, Martin Eberle, Peter Meier, Cristina Nieto-Oberhuber, Karl-Heinz Altmann, Gisbert Schneider, Edgar Jacoby, and Steffen Renner. A collection of robust organic synthesis reactions for in silico molecule design. Journal of Chemical Information and Modeling, 51(12):3093–3098, 2011.
[8] Matthew A Kayala, Chloé-Agathe Azencott, Jonathan H Chen, and Pierre Baldi. Learning to predict chemical reactions. Journal of Chemical Information and Modeling, 51(9):2209–2222, 2011.
[9] Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30(8):595–608, 2016.
[10] Diederik P Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
[11] James Law, Zsolt Zsoldos, Aniko Simon, Darryl Reid, Yang Liu, Sing Yoong Khew, A Peter Johnson, Sarah Major, Robert A Wade, and Howard Y Ando. Route designer: a retrosynthetic analysis tool utilizing automated retrosynthetic rule generation. J. Chem. Inf. Model., 49(3):593–602, 2009. ISSN 1549-9596.
[12] Tao Lei, Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Deriving neural architectures from sequence and graph kernels. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
[13] D. M. Lowe. Patent reaction extraction: downloads; https://bitbucket.org/dan2097/patent-reaction-extraction/downloads. 2014.
[14] Marwin HS Segler and Mark P Waller. Neural-symbolic machine learning for retrosynthesis and reaction prediction. Chemistry–A European Journal, 2017.
[15] Sara Szymkuć, Ewa P. Gajewska, Tomasz Klucznik, Karol Molga, Piotr Dittwald, Michał Startek, Michał Bajczyk, and Bartosz A. Grzybowski. Computer-assisted synthetic planning: The end of the beginning. Angew. Chem., Int. Ed., 55(20):5904–5937, 2016. ISSN 1521-3773. doi: 10.1002/anie.201506101. URL http://dx.doi.org/10.1002/anie.201506101.
[16] Matthew H Todd. Computer-aided organic synthesis. Chemical Society Reviews, 34(3):247–266, 2005.
[17] Wendy A Warr. A short review of chemical reaction database systems, computer-aided synthesis design, reaction prediction and synthetic feasibility. Molecular Informatics, 33(6-7):469–476, 2014.
[18] Jennifer N Wei, David Duvenaud, and Alán Aspuru-Guzik. Neural networks for the prediction of organic chemistry reactions. ACS Central Science, 2(10):725–732, 2016.
Practical Data-Dependent Metric Compression with
Provable Guarantees
Piotr Indyk*
MIT

Ilya Razenshteyn*
MIT

Tal Wagner*
MIT
Abstract
We introduce a new distance-preserving compact representation of multidimensional point-sets. Given n points in a d-dimensional space where each
coordinate is represented using B bits (i.e., dB bits per point), it produces a representation of size O(d log(dB/ε) + log n) bits per point from which one can
approximate the distances up to a factor of 1 ± ε. Our algorithm almost matches
the recent bound of [6] while being much simpler. We compare our algorithm
to Product Quantization (PQ) [7], a state of the art heuristic metric compression
method. We evaluate both algorithms on several data sets: SIFT (used in [7]),
MNIST [11], New York City taxi time series [4] and a synthetic one-dimensional
data set embedded in a high-dimensional space. With appropriately tuned parameters, our algorithm produces representations that are comparable to or better than
those produced by PQ, while having provable guarantees on its performance.
1 Introduction
Compact distance-preserving representations of high-dimensional objects are very useful tools in
data analysis and machine learning. They compress each data point in a data set using a small number
of bits while preserving the distances between the points up to a controllable accuracy. This makes it
possible to run data analysis algorithms, such as similarity search, machine learning classifiers, etc, on
data sets of reduced size. The benefits of this approach include: (a) reduced running time (b) reduced
storage and (c) reduced communication cost (between machines, between CPU and RAM, between
CPU and GPU, etc). These three factors make the computation more efficient overall, especially on
modern architectures where the communication cost is often the dominant factor in the running time,
so fitting the data in a single processing unit is highly beneficial. Because of these benefits, various
compact representations have been extensively studied over the last decade, for applications such
as: speeding up similarity search [3, 5, 10, 19, 22, 7, 15, 18], scalable learning algorithms [21, 12],
streaming algorithms [13] and other tasks. For example, a recent paper [8] describes a similarity
search software package based on one such method (Product Quantization (PQ)) that has been used
to solve very large similarity search problems over billions of points on GPUs at Facebook.
The methods for designing such representations can be classified into data-dependent and dataoblivious. The former analyze the whole data set in order to construct the point-set representation,
while the latter apply a fixed procedure individually to each data point. A classic example of the
data-oblivious approach is based on randomized dimensionality reduction [9], which states that
any set of n points in the Euclidean space of arbitrary dimension D can be mapped into a space of
dimension d = O(ε^{-2} log n), such that the distances between all pairs of points are preserved up to a
factor of 1 ± ε. This allows representing each point using d(B + log D) bits, where B is the number
*Authors ordered alphabetically.
of bits of precision in the coordinates of the original pointset.² More efficient representations are
possible if the goal is to preserve only the distances in a certain range. In particular, O(ε^{-2} log n) bits
are sufficient to distinguish between distances smaller than 1 and greater than 1 + ε, independently of
the precision parameter [10] (see also [16] for kernel generalizations). Even more efficient methods
are known if the coordinates are binary [3, 12, 18].
Data-dependent methods compute the bit representations of points ?holistically", typically by solving
a global optimization problem. Examples of this approach include Semantic Hashing [17], Spectral
Hashing [22] or Product Quantization [7] (see also the survey [20]). Although successful, most of
the results in this line of research are empirical in nature, and we are not aware of any worst-case
accuracy vs. compression tradeoff bounds for those methods along the lines of the aforementioned
data-oblivious approaches.
A recent work [6] shows that it is possible to combine the two approaches and obtain algorithms that
adapt to the data while providing worst-case accuracy/compression tradeoffs. In particular, the latter
paper shows how to construct representations of d-dimensional pointsets that preserve all distances up
to a factor of 1 ± ε while using only O((d + log n) log(1/ε) + log(Bn)) bits per point. Their algorithm
uses hierarchical clustering in order to group close points together, and represents each point by a
displacement vector from a nearby point that has already been stored. The displacement vector is
then appropriately rounded to reduce the representation size. Although theoretically interesting, that
algorithm is rather complex and (to the best of our knowledge) has not been implemented.
Our results.  The main contribution of this paper is QuadSketch (QS), a simple data-adaptive
algorithm, which is both provable and practical. It represents each point using O(d log(dB/ε) + log n)
bits, where (as before) we can set d = O(ε^{-2} log n) using the Johnson-Lindenstrauss lemma. Our
bound significantly improves over the "vanilla" O(dB) bound (obtained by storing all d coordinates
to full precision), and comes close to the bound of [6]. At the same time, the algorithm is quite simple
and intuitive: it computes a d-dimensional quadtree³ and appropriately prunes its edges and nodes.⁴
We evaluate QuadSketch experimentally on both real and synthetic data sets: a SIFT feature data
set from [7], MNIST [11], time series data reflecting taxi ridership in New York City [4] and a
synthetic data set (Diagonal) containing random points from a one-dimensional subspace (i.e., a line)
embedded in a high-dimensional space. The data sets are quite diverse: SIFT and MNIST data sets are
de-facto ?standard? test cases for nearest neighbor search and distance preserving sketches, NYC taxi
data was designed to contain anomalies and ?irrelevant? dimensions, while Diagonal has extremely
low intrinsic dimension. We compare our algorithms to Product Quantization (PQ) [7], a state of
the art method for computing distance-preserving sketches, as well as a baseline simple uniform
quantization method (Grid). The sketch length/accuracy tradeoffs for QS and PQ are comparable on
SIFT and MNIST data, with PQ having higher accuracy for shorter sketches and QS having better
accuracy for longer sketches. On NYC taxi data, the accuracy of QS is higher over the whole range
of sketch lengths. Finally, Diagonal exemplifies a situation where the low dimensionality of the data
set hinders the performance of PQ, while QS naturally adapts to this data set. Overall, QS performs
well on "typical" data sets, while its provable guarantees ensure robust performance in a wide range
of scenarios. Both algorithms improve over the baseline quantization method.
2 Formal Statement of Results
Preliminaries.  Let X = {x_1, . . . , x_n} ⊂ R^d be a pointset in Euclidean space. A compression
scheme constructs from X a bit representation referred to as a sketch. Given the sketch, and
without access to the original pointset, one can decompress the sketch into an approximate pointset
² The bounds can be stated more generally in terms of the aspect ratio Φ of the point-set. See Section 2 for
the discussion.
³ Traditionally, the term "quadtree" is used for the case of d = 2, while its higher-dimensional variants are
called "hyperoctrees" [23]. However, for the sake of simplicity, in this paper we use the same term "quadtree"
for any value of d.
⁴ We note that a similar idea (using kd-trees instead of quadtrees) has been earlier proposed in [1]. However,
we are not aware of any provable space/distortion tradeoffs for the latter algorithm.
X̃ = {x̃_1, . . . , x̃_n} ⊂ R^d. The goal is to minimize the size of the sketch, while approximately
preserving the geometric properties of the pointset, in particular the distances and near neighbors.
In the previous section we parameterized the sketch size in terms of the number of points n, the
dimension d, and the bits per coordinate B. In fact, our results are more general, and can be stated in
terms of the aspect ratio of the pointset, denoted by Φ and defined as the ratio between the largest and
smallest distance,

$$ \Phi = \frac{\max_{1 \le i < j \le n} \|x_i - x_j\|}{\min_{1 \le i < j \le n} \|x_i - x_j\|}. $$

Note that log(Φ) ≤ log d + B, so our bounds, stated in terms of log Φ, immediately imply analogous
bounds in terms of B.
We will use [n] to denote {1, . . . , n}, and Õ(f) to suppress polylogarithmic factors in f.
QuadSketch.  Our compression algorithm, described in detail in Section 3, is based on a randomized
variant of a quadtree followed by a pruning step. In its simplest variant, the trade-off between the
sketch size and compression quality is governed by a single parameter Λ. Specifically, Λ controls
the pruning step, in which the algorithm identifies "non-important" bits among those stored in
the quadtree (i.e., bits whose omission would have little effect on the approximation quality), and
removes them from the sketch. Higher values of Λ result in sketches that are longer but have better
approximation quality.
Approximate nearest neighbors.  Our main theorem provides the following guarantees for the
basic variant of QuadSketch: for each point, the distances from that point to all other points are
preserved up to a factor of 1 ± ε with a constant probability.

Theorem 1. Given ε, δ > 0, let Λ = O(log(d log Φ/(εδ))) and L = log Φ + Λ. QuadSketch runs in
time Õ(ndL) and produces a sketch of size O(ndΛ + n log n) bits, with the following guarantee:
For every i ∈ [n],

$$ \Pr\left[\ \forall j \in [n]:\ \|\tilde{x}_i - \tilde{x}_j\| = (1 \pm \varepsilon)\,\|x_i - x_j\|\ \right] \ge 1 - \delta. $$

In particular, with probability 1 − δ, if x̃_{i*} is the nearest neighbor of x̃_i in X̃, then x_{i*} is a
(1 + ε)-approximate nearest neighbor of x_i in X.
Note that the theorem allows us to compress the input point-set into a sketch and then decompress it
back into a point-set which can be fed to a black box similarity search algorithm. Alternatively, one
can decompress only specific points and approximate the distance between them.
For example, if d = O(ε^{-2} log n) and Φ is polynomially bounded in n, then Theorem 1 uses
Λ = O(log log n + log(1/ε)) bits per coordinate to preserve (1 + ε)-approximate nearest neighbors.
The full version of QuadSketch, described in Section 3, allows extra fine-tuning by exposing additional
parameters of the algorithm. The guarantees for the full version are summarized by Theorem 3 in
Section 3.
Maximum distortion. We also show that a recursive application of QuadSketch makes it possible
to approximately preserve the distances between all pairs of points. This is the setting considered
in [6]. (In contrast, Theorem 1 preserves the distances from any single point.)
Theorem 2. Given ε > 0, let Λ = O(log(d log Φ/ε)) and L = log Φ + Λ. There is a randomized
algorithm that runs in time Õ(ndL) and produces a sketch of size O(ndΛ + n log n) bits, such that
with high probability, every distance ‖x_i − x_j‖ can be recovered from the sketch up to distortion
1 ± ε.
Theorem 2 has a smaller sketch size than that provided by the "vanilla" bound, and only slightly
larger than that in [6]. For example, for d = O(ε^{-2} log n) and Φ = poly(n), it improves over the
"vanilla" bound by a factor of O(log n/ log log n) and is lossier than the bound of [6] by a factor
of O(log log n). However, compared to the latter, our construction time is nearly linear in n. The
comparison is summarized in Table 1.
3
Table 1: Comparison of Euclidean metric sketches with maximum distortion 1 ? , for d =
O(?2 log n) and log ? = O(log n).
R EFERENCE
?Vanilla? bound
Algorithm of [6]
Theorem 2
B ITS PER POINT
C ONSTRUCTION TIME
?2
log n)
O(
?2
log n log(1/))
?
? 1+? + ?2 n) for ? ? (0, 1]
O(n
O(
?2
log n (log log n + log(1/)))
? ?2 n)
O(
O(
2
We remark that Theorem 2 does not let us recover an approximate embedding of the pointset,
x̃_1, . . . , x̃_n, as Theorem 1 does. Instead, the sketch functions as an oracle that accepts queries of the
form (i, j) and returns an approximation for the distance ‖x_i − x_j‖.
3 The Compression Scheme
The sketching algorithm takes as input the pointset X, and two parameters L and Λ that control the
amount of compression.
Step 1: Randomly shifted grid.  The algorithm starts by imposing a randomly shifted axis-parallel
grid on the points. We first enclose the whole pointset in an axis-parallel hypercube H. Let
Δ′ = max_{i∈[n]} ‖x_1 − x_i‖, and Δ = 2^{⌈log Δ′⌉}. Set up H to be centered at x_1 with side length 4Δ.
Now choose σ_1, . . . , σ_d ∈ [−Δ, Δ] independently and uniformly at random, and shift H in each
coordinate j by σ_j. By the choice of side length 4Δ, one can see that H after the shift still contains
the whole pointset. For every integer ℓ such that ℓ ≤ log(4Δ), let G_ℓ denote the axis-parallel
grid with cell side 2^ℓ which is aligned with H.

Note that this step can often be eliminated in practice without affecting the empirical performance of
the algorithm, but it is necessary in order to achieve guarantees for arbitrary pointsets.
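A minimal Python sketch of Step 1 follows; this is our own illustrative code (the names are ours), assuming the pointset contains at least two distinct points:

import numpy as np

def shifted_enclosing_cube(X, rng):
    # Enclose the pointset in a hypercube H of side 4*Delta centered
    # at x_1, where Delta is max_i ||x_1 - x_i|| rounded up to a power
    # of two, then shift H in each coordinate by an independent
    # uniform draw from [-Delta, Delta].
    delta_prime = np.max(np.linalg.norm(X - X[0], axis=1))
    delta = 2.0 ** np.ceil(np.log2(delta_prime))
    side = 4.0 * delta
    shift = rng.uniform(-delta, delta, size=X.shape[1])
    corner = (X[0] - side / 2.0) + shift  # lowest corner of the shifted H
    return corner, side

# Usage: corner, side = shifted_enclosing_cube(X, np.random.default_rng(0))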
Step 2: Quadtree construction.  The 2^d-ary quadtree on the nested grids G_ℓ is naturally defined
by associating every grid cell c in G_ℓ with the tree node at level ℓ, such that its children are the 2^d
grid cells in G_{ℓ−1} which are contained in c. The edge connecting a node v to a child v′ is labeled
with a bitstring of length d defined as follows: the j-th bit is 0 if v′ coincides with the bottom half of
v along coordinate j, and 1 if v′ coincides with the upper half along that coordinate.

In order to construct the tree, we start with H as the root, and bucket the points contained in it into
the 2^d children cells. We only add child nodes for cells that contain at least one point of X. Then we
continue by recursing on the child nodes. The quadtree construction is finished after L levels. We
denote the resulting edge-labeled tree by T*. A construction for L = 2 is illustrated in Figure 1.
Figure 1: Quadtree construction for points x, y, z. The x and y coordinates are written as binary
numbers.
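The construction is a short recursion; the following simplified sketch (our own code, with an ad-hoc dict-based node representation) mirrors Step 2:

import numpy as np

def build_quadtree(corner, side, points, levels):
    # Each node stores its grid cell and its children, keyed by the
    # length-d bit label described above; only cells containing at
    # least one point are expanded, for L recursion levels in total.
    node = {"corner": corner, "side": side, "children": {}}
    if levels == 0:
        return node
    half = side / 2.0
    labels = [tuple((p >= corner + half).astype(int)) for p in points]
    for label in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == label]
        child_corner = corner + half * np.array(label)
        node["children"][label] = build_quadtree(
            child_corner, half, points[idx], levels - 1)
    return node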
We define the level of a tree node with side length 2^ℓ to be ℓ (note that ℓ can be negative). The degree
of a node in T* is its number of children. Since all leaves are located at the bottom level, each point
x_i ∈ X is contained in exactly one leaf, which we henceforth denote by v_i.
Step 3: Pruning.  Consider a downward path u_0, u_1, . . . , u_k in T*, such that u_1, . . . , u_{k−1} are
nodes with degree 1, and u_0, u_k are nodes with degree other than 1 (u_k may be a leaf). For every
such path in T*, if k > Λ + 1, we remove the nodes u_{Λ+1}, . . . , u_{k−1} from T* with all their adjacent
edges (and edge labels). Instead we connect u_k directly to u_Λ as its child. We refer to that edge as
the long edge, and label it with the length of the path it replaces (k − Λ). The original edges from T*
are called short edges. At the end of the pruning step, we denote the resulting tree by T.
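A sketch of the pruning step, reusing the node representation above (our code; the ('long', k − Λ) key is an ad-hoc encoding of the long-edge label):

def prune(node, lam):
    # Contract every maximal chain u_1, ..., u_{k-1} of degree-1 nodes
    # hanging below `node`: when k > lam + 1, keep u_1, ..., u_lam and
    # attach the chain's endpoint u_k to u_lam via one long edge
    # labeled with the number of levels it replaces, k - lam.
    for label in list(node["children"]):
        chain = [node["children"][label]]          # chain[0] is u_1
        while len(chain[-1]["children"]) == 1:
            (only,) = chain[-1]["children"].values()
            chain.append(only)
        k = len(chain)                             # chain = [u_1, ..., u_k]
        if k > lam + 1:
            chain[lam - 1]["children"] = {("long", k - lam): chain[-1]}
        prune(chain[-1], lam)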
The sketch.  For each point x_i ∈ X the sketch stores the index of the leaf v_i that contains it. In
addition it stores the structure of the tree T, encoded using the Eulerian Tour Technique⁵. Specifically,
starting at the root, we traverse T in the Depth First Search (DFS) order. In each step, DFS either
explores the child of the current node (downward step), or returns to the parent node (upward step).
We encode a downward step by 0 and an upward step by 1. With each downward step we also store
the label of the traversed edge (a length-d bitstring for a short edge or the edge length for a long edge,
and an additional bit marking if the edge is short or long).
Decompression.  Recovering x̃_i from the sketch is done simply by following the downward path
from the root of T to the associated leaf v_i, collecting the edge labels of the short edges, and placing
zeros instead of the missing bits of the long edges. The collected bits then correspond to the binary
expansion of the coordinates of x̃_i.

More formally, for every node u (not necessarily a leaf) we define c(u) ∈ R^d as follows: For
j ∈ {1, . . . , d}, concatenate the j-th bit of every short edge label traversed along the downward path
from the root to u. When traversing a long edge labeled with length k, concatenate k zeros.⁶ Then,
place a binary floating point in the resulting bitstring, after the bit corresponding to level 0. (Recall
that the levels in T are defined by the grid cell side lengths, and T might not have any nodes in level
0; in this case we need to pad with 0's either on the right or on the left until we have a 0 bit in the
location corresponding to level 0.) The resulting binary string is the binary expansion of the j-th
coordinate of c(u). Now x̃_i is defined to be c(v_i).
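One coordinate of c(u) can be reassembled as follows (our own sketch; top_level, the level of the first stored bit, is an assumed bookkeeping input):

def decode_coordinate(edge_labels, j, top_level):
    # edge_labels: labels on the root-to-u path, where a short edge is
    # a length-d bit tuple and a long edge is an int k standing for k
    # skipped levels whose bits are replaced by zeros.
    bits = []
    for lab in edge_labels:
        if isinstance(lab, int):
            bits.extend([0] * lab)   # long edge: k zero bits
        else:
            bits.append(lab[j])      # short edge: the j-th bit
    # Bit t sits at level top_level - t, hence weight 2^(top_level - t).
    return sum(b * 2.0 ** (top_level - t) for t, b in enumerate(bits))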
Block QuadSketch.  We can further modify QuadSketch in a manner similar to Product Quantization [7]. Specifically, we partition the d dimensions into m blocks B_1, . . . , B_m of size d/m each, and
apply QuadSketch separately to each block. More formally, for each B_i, we apply QuadSketch to the
pointset (x_1)_{B_i}, . . . , (x_n)_{B_i}, where x_B denotes the (d/m)-dimensional vector obtained by projecting x
on the dimensions in B.
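In code, the block variant is a thin wrapper around any single-block sketching routine (a sketch of ours):

def block_sketch(X, m, sketch_fn):
    # Partition the d coordinates into m contiguous blocks of size d/m
    # and sketch each block independently, as in Product Quantization.
    n, d = X.shape
    assert d % m == 0, "m must divide d"
    w = d // m
    return [sketch_fn(X[:, i * w:(i + 1) * w]) for i in range(m)]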
The following statement is an immediate corollary of Theorem 1.

Theorem 3. Given ε, δ > 0, and m dividing d, set the pruning parameter Λ to O(log(d log Φ/(εδ)))
and the number of levels L to log Φ + Λ. The m-block variant of QuadSketch runs in time Õ(ndL)
and produces a sketch of size O(ndΛ + nm log n) bits, with the following guarantee: For every
i ∈ [n],

$$ \Pr\left[\ \forall j \in [n]:\ \|\tilde{x}_i - \tilde{x}_j\| = (1 \pm \varepsilon)\,\|x_i - x_j\|\ \right] \ge 1 - m\delta. $$
It can be seen that increasing the number of blocks m up to a certain threshold (dΛ/ log n) does
not affect the asymptotic bound on the sketch size. Although we cannot prove that varying m allows
us to improve the accuracy of the sketch, this seems to be the case empirically, as demonstrated in the
experimental section.
⁵ See e.g., https://en.wikipedia.org/wiki/Euler_tour_technique.
⁶ This is the "lossy" step in our sketching method: the original bits could be arbitrary, but they are replaced
with zeros.
Table 2: Datasets used in our empirical evaluation. The aspect ratio of SIFT and MNIST is estimated
on a random sample.

Dataset                Points       Dimension   Aspect ratio (Φ)
SIFT                   1,000,000    128         ≈ 83.2
MNIST                  60,000       784         ≈ 9.2
NYC Taxi               8,874        48          49.5
Diagonal (synthetic)   10,000       128         20,478,740.2

4 Experiments
We evaluate QuadSketch experimentally and compare its performance to Product Quantization
(PQ) [7], a state-of-the-art compression scheme for approximate nearest neighbors, and to a baseline
of uniform scalar quantization, which we refer to as Grid. For each dimension of the dataset, Grid
places k equally spaced landmark scalars on the interval between the minimum and the maximum
values along that dimension, and rounds each coordinate to the nearest landmark.
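For reference, the Grid baseline amounts to a few lines (our own sketch, assuming k >= 2 landmarks per dimension):

import numpy as np

def grid_quantize(X, k):
    # Place k equally spaced landmarks between the min and max of each
    # dimension, then round every coordinate to its nearest landmark.
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard degenerate dimensions
    idx = np.rint((X - lo) / span * (k - 1))
    return lo + idx / (k - 1) * span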
All three algorithms work by partitioning the data dimensions into blocks, and performing a quantization step in each block independently of the other ones. QuadSketch and PQ take the number of
blocks as a parameter, and Grid uses blocks of size 1. The quantization step is the basic algorithm
described in Section 3 for QuadSketch, k-means for PQ, and uniform scalar quantization for Grid.
We test the algorithms on four datasets: The SIFT data used in [7], MNIST [11] (with all vectors
normalized to 1), NYC Taxi ridership data [4], and a synthetic dataset called Diagonal, consisting of
random points on a line embedded in a high-dimensional space. The properties of the datasets are
summarized in Table 2. Note that we were not able to compute the exact diameters for MNIST and
SIFT, hence we only report estimates for Φ for these data sets, obtained via random sampling.
The Diagonal dataset consists of 10,000 points of the form (x, x, . . . , x), where x is chosen independently and uniformly at random from the interval [0..40000]. This yields a dataset with a very
large aspect ratio Φ, and on which partitioning into blocks is not expected to be beneficial since all
coordinates are maximally correlated.
For SIFT and MNIST we use the standard query set provided with each dataset. For Taxi and
Diagonal we use 500 queries chosen at random from each dataset. For the sake of consistency, for all
data sets, we apply the same quantization process jointly to both the point set and the query set, for
both PQ and QS. We note, however, that both algorithms can be run on "out of sample" queries.
For each dataset, we enumerate the number of blocks over all divisors of the dimension d. For
QuadSketch, L ranges in 2, . . . , 20, and Λ ranges in 1, . . . , L − 1. For PQ, the number of k-means
landmarks per block ranges in 2⁵, 2⁶, . . . , 2¹². For both algorithms we include the results for all
combinations of the parameters, and plot the envelope of the best performing combinations.
We report two measures of performance for each dataset: (a) the accuracy, defined as the fraction of
queries for which the sketch returns the true nearest neighbor, and (b) the average distortion, defined
as the ratio between the (true) distances from the query to the reported near neighbor and to the true
nearest neighbor. The sketch size is measured in bits per coordinate. The results appear in Figures 2
to 5. Note that the vertical coordinate in the distortion plots corresponds to the value of ε, not 1 + ε.
For SIFT, we also include a comparison with Cartesian k-Means (CKM) [14], in Figure 6.
4.1 QuadSketch Parameter Setting

We plot how the different parameters of QuadSketch affect its performance. Recall that L determines
the number of levels in the quadtree prior to the pruning step, and Λ controls the amount of pruning.
By construction, the higher we set these parameters, the larger the sketch will be and with better
accuracy. The empirical tradeoff for the SIFT dataset is plotted in Figure 7.
Figure 2: Results for the SIFT dataset.
Figure 3: Results for the MNIST dataset.
Figure 4: Results for the Taxi dataset.
Figure 5: Results for the Diagonal dataset.
Figure 6: Additional results for the SIFT dataset.
Figure 7: On the left, L varies from 2 to 11 for a fixed setting of 16 blocks and Λ = L − 1 (no
pruning). On the right, Λ varies from 1 to 9 for a fixed setting of 16 blocks and L = 10. Increasing Λ
beyond 6 does not have further effect on the resulting sketch.
The optimal setting for the number of blocks is not monotone, and generally depends on the specific
dataset. It was noted in [7] that on SIFT data an intermediate number of blocks gives the best results,
and this is confirmed by our experiments. Figure 8 lists the performance on the SIFT dataset for a
varying number of blocks, for a fixed setting of L = 6 and Λ = 5. It shows that the sketch quality
remains essentially the same, while the size varies significantly, with the optimal size attained at 16
blocks.
# Blocks   Bits per coordinate   Accuracy   Average distortion
1          5.17                  0.719      1.0077
2          4.523                 0.717      1.0076
4          4.02                  0.722      1.0079
8          3.272                 0.712      1.0079
16         2.795                 0.712      1.008
32         3.474                 0.712      1.0082
64         4.032                 0.713      1.0081
128        4.079                 0.72       1.0078

Figure 8: QuadSketch accuracy on SIFT data by number of blocks, with L = 6 and Λ = 5.
References
[1] R. Arandjelović and A. Zisserman. Extremely low bit-rate nearest neighbor search using a set compression tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(12):2396–2406, 2014.
[2] Y. Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In Foundations of Computer Science, 1996. Proceedings., 37th Annual Symposium on, pages 184–193. IEEE, 1996.
[3] A. Z. Broder. On the resemblance and containment of documents. In Compression and Complexity of Sequences 1997. Proceedings, pages 21–29. IEEE, 1997.
[4] S. Guha, N. Mishra, G. Roy, and O. Schrijvers. Robust random cut forest based anomaly detection on streams. In International Conference on Machine Learning, pages 2712–2721, 2016.
[5] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
[6] P. Indyk and T. Wagner. Near-optimal (euclidean) metric compression. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 710–723. SIAM, 2017.
[7] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.
[8] J. Johnson, M. Douze, and H. Jégou. Billion-scale similarity search with GPUs. CoRR, abs/1702.08734, 2017.
[9] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(189-206):1, 1984.
[10] E. Kushilevitz, R. Ostrovsky, and Y. Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. SIAM Journal on Computing, 30(2):457–474, 2000.
[11] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
[12] P. Li, A. Shrivastava, J. L. Moore, and A. C. König. Hashing algorithms for large-scale learning. In Advances in Neural Information Processing Systems, pages 2672–2680, 2011.
[13] S. Muthukrishnan et al. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2):117–236, 2005.
[14] M. Norouzi and D. J. Fleet. Cartesian k-means. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, 2013.
[15] M. Norouzi, D. J. Fleet, and R. R. Salakhutdinov. Hamming distance metric learning. In Advances in Neural Information Processing Systems, pages 1061–1069, 2012.
[16] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In Advances in Neural Information Processing Systems, pages 1509–1517, 2009.
[17] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
[18] A. Shrivastava and P. Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, pages 557–565, 2014.
[19] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[20] J. Wang, W. Liu, S. Kumar, and S.-F. Chang. Learning to hash for indexing big data: a survey. Proceedings of the IEEE, 104(1):34–57, 2016.
[21] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113–1120. ACM, 2009.
[22] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Advances in Neural Information Processing Systems, pages 1753–1760, 2009.
[23] M.-M. Yau and S. N. Srihari. A hierarchical data structure for multidimensional digital images. Communications of the ACM, 26(7):504–515, 1983.
REBAR: Low-variance, unbiased gradient estimates
for discrete latent variable models
George Tucker^{1,*}, Andriy Mnih^{2}, Chris J. Maddison^{2,3},
Dieterich Lawson^{1,*}, Jascha Sohl-Dickstein^{1}
^{1}Google Brain, ^{2}DeepMind, ^{3}University of Oxford
{gjt, amnih, dieterichl, jaschasd}@google.com
[email protected]
Abstract
Learning in models with discrete latent variables is challenging due to high variance
gradient estimators. Generally, approaches have relied on control variates to reduce
the variance of the REINFORCE estimator. Recent work (Jang et al., 2016; Maddison et al., 2016) has taken a different approach, introducing a continuous relaxation
of discrete variables to produce low-variance, but biased, gradient estimates. In this
work, we combine the two approaches through a novel control variate that produces
low-variance, unbiased gradient estimates. Then, we introduce a modification
to the continuous relaxation and show that the tightness of the relaxation can be
adapted online, removing it as a hyperparameter. We show state-of-the-art variance
reduction on several benchmark generative modeling tasks, generally leading to
faster convergence to a better final log-likelihood.
1
Introduction
Models with discrete latent variables are ubiquitous in machine learning: mixture models, Markov
Decision Processes in reinforcement learning (RL), generative models for structured prediction,
and, recently, models with hard attention (Mnih et al., 2014) and memory networks (Zaremba &
Sutskever, 2015). However, when the discrete latent variables cannot be marginalized out analytically,
maximizing objectives over these models using REINFORCE-like methods (Williams, 1992) is
challenging due to high-variance gradient estimates obtained from sampling. Most approaches to
reducing this variance have focused on developing clever control variates (Mnih & Gregor, 2014;
Titsias & Lázaro-Gredilla, 2015; Gu et al., 2015; Mnih & Rezende, 2016). Recently, Jang et al. (2016)
and Maddison et al. (2016) independently introduced a novel distribution, the Gumbel-Softmax or
Concrete distribution, that continuously relaxes discrete random variables. Replacing every discrete
random variable in a model with a Concrete random variable results in a continuous model where the
reparameterization trick is applicable (Kingma & Welling, 2013; Rezende et al., 2014). The gradients
are biased with respect to the discrete model, but can be used effectively to optimize large models.
The tightness of the relaxation is controlled by a temperature hyperparameter. In the low temperature
limit, the gradient estimates become unbiased, but the variance of the gradient estimator diverges, so
the temperature must be tuned to balance bias and variance.
We sought an estimator that is low-variance, unbiased, and does not require tuning additional
hyperparameters. To construct such an estimator, we introduce a simple control variate based on the
difference between the REINFORCE and the reparameterization trick gradient estimators for the
relaxed model. This reduces variance, but does not outperform state-of-the-art methods on its own.
Our key contribution is to show that it is possible to conditionally marginalize the control variate
*Work done as part of the Google Brain Residency Program.
Source code for experiments: github.com/tensorflow/models/tree/master/research/rebar
to significantly improve its effectiveness. We call this the REBAR gradient estimator, because it
combines REINFORCE gradients with gradients of the Concrete relaxation. Next, we show that
a modification to the Concrete relaxation connects REBAR to MuProp in the high temperature
limit. Finally, because REBAR is unbiased for all temperatures, we show that the temperature
can be optimized online to reduce variance further and relieve the burden of setting an additional
hyperparameter.
In our experiments, we illustrate the potential problems inherent with biased gradient estimators on
a toy problem. Then, we use REBAR to train generative sigmoid belief networks (SBNs) on the
MNIST and Omniglot datasets and to train conditional generative models on MNIST. Across tasks,
we show that REBAR has state-of-the-art variance reduction which translates to faster convergence
and better final log-likelihoods. Although we focus on binary variables for simplicity, this work is
equally applicable to categorical variables (Appendix C).
2 Background
For clarity, we first consider a simplified scenario. Let b ∼ Bernoulli(θ) be a vector of independent
binary random variables parameterized by θ. We wish to maximize

    E_{p(b)}[f(b, θ)],

where f(b, θ) is differentiable with respect to b and θ, and we suppress the dependence of p(b) on θ to
reduce notational clutter. This covers a wide range of discrete latent variable problems; for example,
in variational inference f(b, θ) would be the stochastic variational lower bound.
Typically, this problem has been approached by gradient ascent, which requires efficiently estimating

    d/dθ E_{p(b)}[f(b, θ)] = E_{p(b)}[ ∂f(b, θ)/∂θ + f(b, θ) ∂/∂θ log p(b) ].    (1)
In practice, the first term can be estimated effectively with a single Monte Carlo sample, however,
a naïve single sample estimator of the second term has high variance. Because the dependence of
f(b, θ) on θ is straightforward to account for, to simplify exposition we assume that f(b, θ) = f(b)
does not depend on θ and concentrate on the second term.
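As an informal NumPy sketch (not from the paper; f and all numerical values are placeholders chosen only for illustration), the single-sample score-function estimator for independent Bernoulli(θ) variables looks like this:

    import numpy as np

    def reinforce_grad(theta, f, rng):
        # Single-sample REINFORCE estimate of d/dtheta E_{p(b)}[f(b)].
        # For independent Bernoulli(theta) variables,
        # d/dtheta log p(b) = (b - theta) / (theta * (1 - theta)).
        b = (rng.uniform(size=theta.shape) < theta).astype(float)
        score = (b - theta) / (theta * (1.0 - theta))
        return f(b) * score

    rng = np.random.default_rng(0)
    theta = np.full(4, 0.3)
    f = lambda b: np.sum((b - 0.45) ** 2)
    grads = np.stack([reinforce_grad(theta, f, rng) for _ in range(10000)])
    # The true gradient here is 1 - 2*0.45 = 0.1 per coordinate; the sample
    # mean is close to it, while the per-sample variance is large.
    print(grads.mean(axis=0), grads.var(axis=0))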
2.1 Variance reduction through control variates
Paisley et al. (2012); Ranganath et al. (2014); Mnih & Gregor (2014); Gu et al. (2015) show that
carefully designed control variates can reduce the variance of the second term significantly. Control
variates seek to reduce the variance of such estimators using closed form expectations for closely
related terms. We can subtract any c (random or constant) as long as we can correct the bias (see
Appendix A and (Paisley et al., 2012) for a review of control variates in this context):
    ∂/∂θ E_{p(b)}[f(b)] = ∂/∂θ E_{p(b,c)}[f(b) − c] + ∂/∂θ E_{p(b,c)}[c]
                        = E_{p(b,c)}[ (f(b) − c) ∂/∂θ log p(b) ] + ∂/∂θ E_{p(b,c)}[c].
For example, NVIL (Mnih & Gregor, 2014) learns a c that does not depend² on b and MuProp (Gu
et al., 2015) uses a linear Taylor expansion of f around E_{p(b|θ)}[b]. Unfortunately, even with a control
variate, the term can still have high variance.
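To make the role of the baseline concrete, here is a minimal sketch (ours, not the paper's) of the estimator above with a constant c; in practice a learned, input-dependent baseline as in NVIL would replace the constant:

    import numpy as np

    def reinforce_with_baseline(theta, f, c, rng):
        # (f(b) - c) * d/dtheta log p(b) stays unbiased for any c that is
        # independent of b, because E[d/dtheta log p(b)] = 0.
        b = (rng.uniform(size=theta.shape) < theta).astype(float)
        score = (b - theta) / (theta * (1.0 - theta))
        return (f(b) - c) * score

    rng = np.random.default_rng(0)
    theta = np.full(4, 0.3)
    f = lambda b: np.sum((b - 0.45) ** 2)
    for c in (0.0, 0.9):  # c near E[f(b)] (about 0.93 here) helps most
        g = np.stack([reinforce_with_baseline(theta, f, c, rng)
                      for _ in range(10000)])
        print(c, g.mean(axis=0).round(3), g.var(axis=0).round(3))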
2.2 Continuous relaxations for discrete variables
Alternatively, following Maddison et al. (2016), we can parameterize b as b = H(z), where H is the
element-wise hard threshold function³ and z is a vector of independent Logistic random variables
defined by

    z := g(u, θ) := log( θ/(1 − θ) ) + log( u/(1 − u) ),

² In this case, c depends on the implicit observation in variational inference.
³ H(z) = 1 if z ≥ 0 and H(z) = 0 if z < 0.
where u ∼ Uniform(0, 1). Notably, z is differentiably reparameterizable (Kingma & Welling,
2013; Rezende et al., 2014), but the discontinuous hard threshold function prevents us from using
the reparameterization trick directly. Replacing all occurrences of the hard threshold function
with a continuous relaxation H(z) ≈ σ_λ(z) := σ(z/λ) = (1 + exp(−z/λ))⁻¹ however results in a
reparameterizable computational graph. Thus, we can compute low-variance gradient estimates for
the relaxed model that approximate the gradient for the discrete model. In summary,
    ∂/∂θ E_{p(b)}[f(b)] = ∂/∂θ E_{p(z)}[f(H(z))] ≈ ∂/∂θ E_{p(z)}[f(σ_λ(z))] = E_{p(u)}[ ∂/∂θ f(σ_λ(g(u, θ))) ],

where λ > 0 can be thought of as a temperature that controls the tightness of the relaxation (at low
temperatures, the relaxation is nearly tight). This generally results in a low-variance, but biased
Monte Carlo estimator for the discrete model. As λ → 0, the approximation becomes exact, but the
variance of the Monte Carlo estimator diverges. Thus, in practice, λ must be tuned to balance bias
and variance. See Appendix C and Jang et al. (2016); Maddison et al. (2016) for the generalization to
the categorical case.
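A hedged sketch of this reparameterized (biased, low-variance) estimator follows; it is ours, not the paper's, and the θ-derivative is taken numerically here, whereas an autodiff framework would differentiate through g analytically:

    import numpy as np

    def concrete_grad(theta, lam, f, rng, eps=1e-6):
        # Reparameterized gradient of E[f(sigma_lam(z))] w.r.t. theta,
        # estimated with one sample of u and central differences in theta.
        u = rng.uniform(size=theta.shape)
        def value(th):
            z = np.log(th / (1 - th)) + np.log(u / (1 - u))  # z = g(u, th)
            return f(1.0 / (1.0 + np.exp(-z / lam)))         # sigma_lam(z)
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta); d[i] = eps
            grad[i] = (value(theta + d) - value(theta - d)) / (2 * eps)
        return grad

    rng = np.random.default_rng(0)
    theta = np.full(4, 0.3)
    f = lambda b: np.sum((b - 0.45) ** 2)
    print(np.mean([concrete_grad(theta, 0.5, f, rng) for _ in range(2000)],
                  axis=0))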
3 REBAR
We seek a low-variance, unbiased gradient estimator. Inspired by the Concrete relaxation, our strategy
will be to construct a control variate (see Appendix A for a review of control variates in this context)
based on the difference between the REINFORCE gradient estimator for the relaxed model and the
gradient estimator from the reparameterization trick. First, note that, closely following Eq. 1,

    E_{p(b)}[ f(b) ∂/∂θ log p(b) ] = ∂/∂θ E_{p(b)}[f(b)] = ∂/∂θ E_{p(z)}[f(H(z))] = E_{p(z)}[ f(H(z)) ∂/∂θ log p(z) ].    (2)
The similar form of the REINFORCE gradient estimator for the relaxed model

    ∂/∂θ E_{p(z)}[f(σ_λ(z))] = E_{p(z)}[ f(σ_λ(z)) ∂/∂θ log p(z) ]    (3)
suggests it will be strongly correlated and thus be an effective control variate. Unfortunately, the
Monte Carlo gradient estimator derived from the left hand side of Eq. 2 has much lower variance
than the Monte Carlo gradient estimator derived from the right hand side. This is because the left
hand side can be seen as analytically performing a conditional marginalization over z given b, which
is noisily approximated by Monte Carlo samples on the right hand side (see Appendix B for details).
Our key insight is that an analogous conditional marginalization can be performed for the control
variate (Eq. 3),

    E_{p(z)}[ f(σ_λ(z)) ∂/∂θ log p(z) ] = E_{p(b)}[ ∂/∂θ E_{p(z|b)}[f(σ_λ(z))] ] + E_{p(b)}[ E_{p(z|b)}[f(σ_λ(z))] ∂/∂θ log p(b) ],
where the first term on the right-hand side can be efficiently estimated with the reparameterization
trick (see Appendix C for the details)

    E_{p(b)}[ ∂/∂θ E_{p(z|b)}[f(σ_λ(z))] ] = E_{p(b)}[ E_{p(v)}[ ∂/∂θ f(σ_λ(z̃)) ] ],
where v ∼ Uniform(0, 1) and z̃ ≡ g̃(v, b, θ) is the differentiable reparameterization for z|b (Appendix C). Therefore,
    E_{p(z)}[ f(σ_λ(z)) ∂/∂θ log p(z) ] = E_{p(b)}[ E_{p(v)}[ ∂/∂θ f(σ_λ(z̃)) ] ] + E_{p(b)}[ E_{p(z|b)}[f(σ_λ(z))] ∂/∂θ log p(b) ].
Using this to form the control variate and correcting with the reparameterization trick gradient, we
arrive at
    ∂/∂θ E_{p(b)}[f(b)] = E_{p(u,v)}[ (f(H(z)) − η f(σ_λ(z̃))) ∂/∂θ log p(b) |_{b=H(z)}
                          + η ∂/∂θ f(σ_λ(z)) − η ∂/∂θ f(σ_λ(z̃)) ],    (4)
where u, v ∼ Uniform(0, 1), z ≡ g(u, θ), z̃ ≡ g̃(v, H(z), θ), and η is a scaling on the control
variate. The REBAR estimator is the single sample Monte Carlo estimator of this expectation. To
reduce computation and variance, we couple u and v using common random numbers (Appendix G,
(Owen, 2013)). We estimate η by minimizing the variance of the Monte Carlo estimator with SGD.
In Appendix D, we present an alternative derivation of REBAR that is shorter, but less intuitive.
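For readers who want the moving parts of Eq. 4 in one place, here is a minimal single-sample sketch (ours, with illustrative values). It assumes the standard inverse-CDF construction for z̃ ∼ z|b (the paper's Appendix C, not shown here, gives the exact form), and the reparameterized terms are differentiated by central differences with u, v, and b held fixed:

    import numpy as np

    def sigma(x):
        return 1.0 / (1.0 + np.exp(-x))

    def rebar_grad(theta, lam, eta, f, rng, eps=1e-6):
        # Single-sample REBAR estimate of d/dtheta E_{p(b)}[f(b)] (eqn. 4).
        u = rng.uniform(size=theta.shape)
        v = rng.uniform(size=theta.shape)
        g = lambda th: np.log(th / (1 - th)) + np.log(u / (1 - u))
        b = (g(theta) >= 0).astype(float)            # b = H(z)
        def g_tilde(th):
            # z_tilde ~ z | b: u' is Uniform on the slice of (0,1) giving b.
            up = b * (v * th + 1 - th) + (1 - b) * v * (1 - th)
            return np.log(th / (1 - th)) + np.log(up / (1 - up))
        score = (b - theta) / (theta * (1 - theta))  # d/dtheta log p(b)
        relaxed = lambda th: f(sigma(g(th) / lam))
        relaxed_cond = lambda th: f(sigma(g_tilde(th) / lam))
        grad = (f(b) - eta * relaxed_cond(theta)) * score
        for i in range(theta.size):                  # reparameterized terms
            d = np.zeros_like(theta); d[i] = eps
            grad[i] += eta * (relaxed(theta + d) - relaxed(theta - d)) / (2 * eps)
            grad[i] -= eta * (relaxed_cond(theta + d) - relaxed_cond(theta - d)) / (2 * eps)
        return grad

    rng = np.random.default_rng(0)
    theta = np.full(4, 0.3)
    f = lambda b: np.sum((b - 0.45) ** 2)
    est = np.mean([rebar_grad(theta, 0.5, 1.0, f, rng) for _ in range(5000)], axis=0)
    print(est)  # should approach the true gradient, 0.1 per coordinate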
3.1 Rethinking the relaxation and a connection to MuProp
Because σ_λ(z) → 1/2 as λ → ∞, we consider an alternative relaxation

    H(z) ≈ σ( ((λ² + λ + 1)/(λ(λ + 1))) log( θ/(1 − θ) ) + (1/λ) log( u/(1 − u) ) ) = σ_λ(z_λ),    (5)

where z_λ = ((λ² + λ + 1)/(λ + 1)) log( θ/(1 − θ) ) + log( u/(1 − u) ). As λ → ∞, the relaxation converges to the mean, θ, and still
as λ → 0, the relaxation becomes exact. Furthermore, as λ → ∞, the REBAR estimator converges
to MuProp without the linear term (see Appendix E). We refer to this estimator as SimpleMuProp in
the results.
3.2 Optimizing temperature (λ)
The REBAR gradient estimator is unbiased for any choice of λ > 0, so we can optimize λ to minimize
the variance of the estimator without affecting its unbiasedness (similar to optimizing the dispersion
coefficients in Ruiz et al. (2016)). In particular, denoting the REBAR gradient estimator by r(λ), then

    ∂/∂λ Var(r(λ)) = ∂/∂λ ( E[r(λ)²] − E[r(λ)]² ) = E[ 2 r(λ) ∂r(λ)/∂λ ]

because E[r(λ)] does not depend on λ. The resulting expectation can be estimated with a single
sample Monte Carlo estimator. This allows the tightness of the relaxation to be adapted online jointly
with the optimization of the parameters and relieves the burden of choosing λ ahead of time.
3.3 Multilayer stochastic networks
Suppose we have multiple layers of stochastic units (i.e., b = {b1 , b2 , . . . , bn }) where p(b) factorizes
as
    p(b_{1:n}) = p(b₁) p(b₂|b₁) ··· p(b_n|b_{n−1}),
and similarly for the underlying Logistic random variables p(z_{1:n}), recalling that b_i = H(z_i). We
can define a relaxed distribution over z_{1:n} where we replace the hard threshold function H(z) with a
continuous relaxation σ_λ(z). We refer to the relaxed distribution as q(z_{1:n}).
We can take advantage of the structure of p, by using the fact that the high variance REINFORCE
term of the gradient also decomposes
    E_{p(b)}[ f(b) ∂/∂θ log p(b) ] = Σ_i E_{p(b)}[ f(b) ∂/∂θ log p(b_i | b_{i−1}) ].
Focusing on the ith term, we have

    E_{p(b)}[ f(b) ∂/∂θ log p(b_i | b_{i−1}) ] = E_{p(b_{1:i−1})}[ E_{p(b_i | b_{i−1})}[ E_{p(b_{i+1:n} | b_i)}[f(b)] ∂/∂θ log p(b_i | b_{i−1}) ] ],

which suggests the following control variate

    E_{p(z_i | b_i, b_{i−1})}[ E_{q(z_{i+1:n} | z_i)}[ f(b_{1:i−1}, σ_λ(z_{i:n})) ] ] ∂/∂θ log p(b_i | b_{i−1})
for the middle expectation. Similarly to the single layer case, we can debias the control variate
with terms that are reparameterizable. Note that due to the switch between sampling from p and
sampling from q, this approach requires n passes through the network (one pass per layer). We
discuss alternatives that do not require multiple passes through the network in Appendix F.
3.4 Q-functions
Finally, we note that since the derivation of this control variate is independent of f , the REBAR
control variate can be generalized by replacing f with a learned, differentiable Q-function. This
suggests that the REBAR control variate is applicable to RL, where it would allow a "pseudo-action"-dependent baseline. In this case, the pseudo-action would be the relaxation of the discrete output
from a policy network.
4 Related work
Most approaches to optimizing an expectation of a function w.r.t. a discrete distribution based on
samples from the distribution can be seen as applications of the REINFORCE (Williams, 1992)
gradient estimator, also known as the likelihood ratio (Glynn, 1990) or score-function estimator
(Fu, 2006). Following the notation from Section 2, the basic form of an estimator of this type
is (f(b) − c) ∂/∂θ log p(b), where b is a sample from the discrete distribution and c is some quantity
log p(b) where b is a sample from the discrete distribution and c is some quantity
independent of b, known as a baseline. Such estimators are unbiased, but without a carefully chosen
baseline their variance tends to be too high for the estimator to be useful and much work has gone
into finding effective baselines.
In the context of training latent variable models, REINFORCE-like methods have been used to
implement sampling-based variational inference with either fully factorized (Wingate & Weber, 2013;
Ranganath et al., 2014) or structured (Mnih & Gregor, 2014; Gu et al., 2015) variational distributions.
All of these involve learned baselines: from simple scalar baselines (Wingate & Weber, 2013;
Ranganath et al., 2014) to nonlinear input-dependent baselines (Mnih & Gregor, 2014). MuProp
(Gu et al., 2015) combines an input-dependent baseline with a first-order Taylor approximation to
the function based on the corresponding mean-field network to achieve further variance reduction.
REBAR is similar to MuProp in that it also uses gradient information from a proxy model to reduce
the variance of a REINFORCE-like estimator. The main difference is that in our approach the proxy
model is essentially the relaxed (but still stochastic) version of the model we are interested in, whereas
MuProp uses the mean field version of the model as a proxy, which can behave very differently
from the original model due to being completely deterministic. The relaxation we use was proposed
by (Maddison et al., 2016; Jang et al., 2016) as a way of making discrete latent variable models
reparameterizable, resulting in a low-variance but biased gradient estimator for the original model.
REBAR on the other hand, uses the relaxation in a control variate which results in an unbiased,
low-variance estimator. Alternatively, Titsias & Lázaro-Gredilla (2015) introduced local expectation
gradients, a general purpose unbiased gradient estimator for models with continuous and discrete
latent variables. However, it typically requires substantially more computation than other methods.
Recently, a specialized REINFORCE-like method was proposed for the tighter multi-sample version
of the variational bound (Burda et al., 2015) which uses a leave-one-out technique to construct
per-sample baselines (Mnih & Rezende, 2016). This approach is orthogonal to ours, and we expect it
to benefit from incorporating the REBAR control variate.
5 Experiments
As our goal was variance reduction to improve optimization, we compared our method to the
state-of-the-art unbiased single-sample gradient estimators, NVIL (Mnih & Gregor, 2014) and
MuProp (Gu et al., 2015), and the state-of-the-art biased single-sample gradient estimator Gumbel-Softmax/Concrete (Jang et al., 2016; Maddison et al., 2016) by measuring their progress on the
training objective and the variance of the unbiased gradient estimators.⁴ We start with an illustrative
problem and then follow the experimental setup established in (Maddison et al., 2016) to evaluate the
methods on generative modeling and structured prediction tasks.
⁴ Both MuProp and REBAR require twice as much computation per step as NVIL and Concrete. To present
comparable results with previous work, we plot our results in steps. However, to offer a fair comparison, NVIL
should use two samples and thus reduce its variance by half (or log(2) ≈ 0.69 in our plots).
Figure 1: Log variance of the gradient estimator (left) and loss (right) for the toy problem with
t = 0.45. Only the unbiased estimators converge to the correct answer. We indicate the temperature
in parentheses where relevant.
5.1 Toy problem
To illustrate the potential ill-effects of biased gradient estimators, we evaluated the methods on a
simple toy problem. We wish to minimize E_{p(b)}[(b − t)²], where t ∈ (0, 1) is a continuous target
value, and we have a single parameter controlling the Bernoulli distribution. Figure 1 shows the
perils of biased gradient estimators. The optimal solution is deterministic (i.e., p(b = 1) ∈ {0, 1}),
whereas the Concrete estimator converges to a stochastic one. All of the unbiased estimators correctly
converge to the optimal loss, whereas the biased estimator fails to. For this simple problem, it is
sufficient to reduce the temperature of the relaxation to achieve an acceptable solution.
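A tiny self-contained reproduction of this toy setup (our sketch; learning rate, seed, and clipping are arbitrary) shows REINFORCE driving the Bernoulli parameter to the deterministic optimum, whereas a high-temperature Concrete gradient would settle at a stochastic θ:

    import numpy as np

    rng = np.random.default_rng(0)
    t, theta, lr = 0.45, 0.5, 0.05
    for _ in range(3000):
        b = float(rng.uniform() < theta)
        grad = (b - t) ** 2 * (b - theta) / (theta * (1 - theta))
        # clipping keeps the 1/(theta*(1-theta)) score factor bounded
        theta = float(np.clip(theta - lr * grad, 1e-3, 1 - 1e-3))
    print(theta)  # since t < 0.5, the deterministic optimum is theta = 0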
5.2 Learning sigmoid belief networks (SBNs)
Next, we trained SBNs on several standard benchmark tasks. We follow the setup established in
(Maddison et al., 2016). We used the statically binarized MNIST digits from Salakhutdinov & Murray
(2008) and a fixed binarization of the Omniglot character dataset. We used the standard splits into
training, validation, and test sets. The network used several layers of 200 stochastic binary units
interleaved with deterministic nonlinearities. In our experiments, we used either a linear deterministic
layer (denoted linear) or 2 layers of 200 tanh units (denoted nonlinear).
5.2.1 Generative modeling on MNIST and Omniglot
For generative modeling, we maximized a single-sample variational lower bound on the log-likelihood.
We performed amortized inference (Kingma & Welling, 2013; Rezende et al., 2014) with an inference
network with similar architecture in the reverse direction. In particular, denoting the image by x and
the hidden layer stochastic activations by b ∼ q(b|x, θ), we have

    log p(x|θ) ≥ E_{q(b|x,θ)}[ log p(x, b|θ) − log q(b|x, θ) ],
which has the required form for REBAR.
To measure the variance of the gradient estimators, we follow a single optimization trajectory
and use the same random numbers for all methods. This significantly reduces the variance in
our measurements. We plot the log variance of the unbiased gradient estimators in Figure 2 for
MNIST (Appendix Figure App.3 for Omniglot). REBAR produced the lowest variance across
linear and nonlinear models for both tasks. The reduction in variance was especially large for
the linear models. For the nonlinear model, REBAR (0.1) reduced variance at the beginning of
training, but its performance degraded later in training. REBAR was able to adaptively change the
temperature as optimization progressed and retained superior variance reduction. We also observed
that SimpleMuProp was a surprisingly strong baseline that improved significantly over NVIL. It
performed similarly to MuProp despite not explicitly using the gradient of f .
Generally, lower variance gradient estimates led to faster optimization of the objective and convergence to a better final value (Figure 3, Table 1, Appendix Figures App.2 and App.4). For the nonlinear
model, the Concrete estimator underperformed at optimizing the training objective in both tasks.
Figure 2: Log variance of the gradient estimator for the two layer linear model (left) and single layer
nonlinear model (right) on the MNIST generative modeling task. All of the estimators are unbiased,
so their variance is directly comparable. We estimated moments from exponential moving averages
(with decay=0.999; we found that the results were robust to the exact value). The temperature is
shown in parentheses where relevant.
Figure 3: Training variational lower bound for the two layer linear model (left) and single layer
nonlinear model (right) on the MNIST generative modeling task. We plot 5 trials over different
random initializations for each method with the median trial highlighted. The temperature is shown
in parentheses where relevant.
Although our primary focus was optimization, for completeness, we include results on the test set in
Appendix Table App.2 computed with a 100-sample lower bound Burda et al. (2015). Improvements
on the training variational lower bound do not directly translate into improved test log-likelihood.
Previous work (Maddison et al., 2016) showed that regularizing the inference network alone was
sufficient to prevent overfitting. This led us to hypothesize that the overfitting was primarily
due to overfitting in the inference network (q). To test this, we trained a separate inference network
on the validation and test sets, taking care not to affect the model parameters. This reduced overfitting
(Appendix Figure App.5), but did not completely resolve the issue, suggesting that the generative and
inference networks jointly overfit.
5.2.2 Structured prediction on MNIST
Structured prediction is a form of conditional density estimation that aims to model high dimensional
observations given a context. We followed the structured prediction task described by Raiko et al.
(2014), where we modeled the bottom half of an MNIST digit (x) conditional on the top half (c). The
conditional generative network takes as input c and passes it through an SBN. We optimized a single
sample lower bound on the log-likelihood,

    log p(x|c, θ) ≥ E_{p(b|c,θ)}[ log p(x|b, θ) ].
We measured the log variance of the gradient estimator (Figure 4) and found that REBAR significantly
reduced variance. In some configurations, MuProp excelled, especially with the single layer linear
model where the first order expansion that MuProp uses is most accurate. Again, the training objective
performance generally mirrored the reduction in variance of the gradient estimator (Figure 5, Table
1).
                      NVIL     MuProp   REBAR (0.1)  REBAR    Concrete (0.1)
MNIST gen.
  Linear 1 layer      112.5    111.7    111.7        111.6    111.3
  Linear 2 layer      99.6     99.07    99           98.8     99.62
  Nonlinear           102.2    101.5    101.4        101.1    102.8
Omniglot gen.
  Linear 1 layer      117.44   117.09   116.93       116.83   117.23
  Linear 2 layer      109.98   109.55   109.12       108.99   109.95
  Nonlinear           110.4    109.58   109          108.72   110.64
MNIST struct. pred.
  Linear 1 layer      69.17    64.33    65.73        65.21    65.49
  Linear 2 layer      68.87    63.69    65.5         61.72    66.88
  Nonlinear           54.08    47.6     47.302       46.44    47.02
Table 1: Mean training variational lower bound over 5 trials with different random initializations.
The standard error of the mean is given in the Appendix. We bolded the best performing method (up
to standard error) for each task. We report trials using the best performing learning rate for each task.
Figure 4: Log variance of the gradient estimator for the two layer linear model (left) and single layer
nonlinear model (right) on the structured prediction task.
6 Discussion
Inspired by the Concrete relaxation, we introduced REBAR, a novel control variate for REINFORCE,
and demonstrated that it greatly reduces the variance of the gradient estimator. We also showed that
with a modification to the relaxation, REBAR and MuProp are closely related in the high temperature
limit. Moreover, we showed that we can adapt the temperature online and that it further reduces
variance.
Roeder et al. (2017) show that the reparameterization gradient includes a score function term which
can adversely affect the gradient variance. Because the reparameterization gradient only enters the
Figure 5: Training variational lower bound for the two layer linear model (left) and single layer
nonlinear model (right) on the structured prediction task. We plot 5 trials over different random
initializations for each method with the median trial highlighted.
REBAR estimator through differences of reparameterization gradients, we implicitly implement the
recommendation from (Roeder et al., 2017).
When optimizing the relaxation temperature, we require the derivative with respect to λ of the
gradient of the parameters. Empirically, the temperature changes slowly relative to the parameters,
so we might be able to amortize the cost of this operation over several parameter updates. We leave
exploring these ideas to future work.
It would be natural to explore the extension to the multi-sample case (e.g., VIMCO (Mnih & Rezende,
2016)), to leverage the layered structure in our models using Q-functions, and to apply this approach
to reinforcement learning.
Acknowledgments
We thank Ben Poole and Eric Jang for helpful discussions and assistance replicating their results.
References
Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv
preprint arXiv:1509.00519, 2015.
Michael C Fu. Gradient estimation. Handbooks in operations research and management science, 13:
575?616, 2006.
Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the
ACM, 33(10):75?84, 1990.
Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. Muprop: Unbiased backpropagation
for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv
preprint arXiv:1611.01144, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint
arXiv:1312.6114, 2013.
Chris J. Maddison, Daniel Tarlow, and Tom Minka. A* Sampling. In Advances in Neural Information
Processing Systems 27, 2014.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous
relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In
Proceedings of The 31st International Conference on Machine Learning, pp. 1791?1799, 2014.
Andriy Mnih and Danilo Rezende. Variational inference for monte carlo objectives. In Proceedings
of The 33rd International Conference on Machine Learning, pp. 2188?2196, 2016.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In
Advances in neural information processing systems, pp. 2204?2212, 2014.
Art B. Owen. Monte Carlo theory, methods and examples. 2013.
John Paisley, David M Blei, and Michael I Jordan. Variational bayesian inference with stochastic
search. In Proceedings of the 29th International Coference on International Conference on
Machine Learning, pp. 1363?1370, 2012.
Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary
stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.
Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pp.
814?822, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and
approximate inference in deep generative models. In Proceedings of The 31st International
Conference on Machine Learning, pp. 1278?1286, 2014.
Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: An asymptotically
zero-variance gradient estimator for variational inference. arXiv preprint arXiv:1703.09194, 2017.
Francisco JR Ruiz, Michalis K Titsias, and David M Blei. Overdispersed black-box variational
inference. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence,
pp. 647?656. AUAI Press, 2016.
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In
Proceedings of the 25th international conference on Machine learning, pp. 872?879. ACM, 2008.
Michalis K Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational
inference. In Advances in Neural Information Processing Systems, pp. 2638?2646, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine learning, 8(3-4):229?256, 1992.
David Wingate and Theophane Weber. Automated variational inference in probabilistic programming.
arXiv preprint arXiv:1301.1299, 2013.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv
preprint arXiv:1505.00521, 362, 2015.
6,475 | 6,857 | Nonlinear random matrix theory for deep learning
Jeffrey Pennington
Google Brain
[email protected]
Pratik Worah
Google Research
[email protected]
Abstract
Neural network configurations with random weights play an important role in the
analysis of deep learning. They define the initial loss landscape and are closely
related to kernel and random feature methods. Despite the fact that these networks
are built out of random matrices, the vast and powerful machinery of random matrix
theory has so far found limited success in studying them. A main obstacle in this
direction is that neural networks are nonlinear, which prevents the straightforward
utilization of many of the existing mathematical results. In this work, we open
the door for direct applications of random matrix theory to deep learning by
demonstrating that the pointwise nonlinearities typically applied in neural networks
can be incorporated into a standard method of proof in random matrix theory
known as the moments method. The test case for our study is the Gram matrix
Y T Y , Y = f (W X), where W is a random weight matrix, X is a random data
matrix, and f is a pointwise nonlinear activation function. We derive an explicit
representation for the trace of the resolvent of this matrix, which defines its limiting
spectral distribution. We apply these results to the computation of the asymptotic
performance of single-layer random feature networks on a memorization task and
to the analysis of the eigenvalues of the data covariance matrix as it propagates
through a neural network. As a byproduct of our analysis, we identify an intriguing
new class of activation functions with favorable properties.
1 Introduction
The list of successful applications of deep learning is growing at a staggering rate. Image
recognition (Krizhevsky et al., 2012), audio synthesis (Oord et al., 2016), translation (Wu et al.,
2016), and speech recognition (Hinton et al., 2012) are just a few of the recent achievements. Our
theoretical understanding of deep learning, on the other hand, has progressed at a more modest pace.
A central difficulty in extending our understanding stems from the complexity of neural network loss
surfaces, which are highly non-convex functions, often of millions or even billions (Shazeer et al.,
2017) of parameters.
In the physical sciences, progress in understanding large complex systems has often come by
approximating their constituents with random variables; for example, statistical physics and
thermodynamics are based in this paradigm. Since modern neural networks are undeniably large
complex systems, it is natural to consider what insights can be gained by approximating their
parameters with random variables. Moreover, such random configurations play at least two privileged
roles in neural networks: they define the initial loss surface for optimization, and they are closely
related to random feature and kernel methods. Therefore it is not surprising that random neural
networks have attracted significant attention in the literature over the years.
Another useful technique for simplifying the study of large complex systems is to approximate
their size as infinite. For neural networks, the concept of size has at least two axes: the number
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
of samples and the number of parameters. It is common, particularly in the statistics literature, to
consider the mean performance of a finite-capacity model against a given data distribution. From this
perspective, the number of samples, m, is taken to be infinite relative to the number of parameters, n,
i.e. n/m → 0. An alternative perspective is frequently employed in the study of kernel or random
feature methods. In this case, the number of parameters is taken to be infinite relative to the number
of samples, i.e. n/m → ∞. In practice, however, most successful modern deep learning architectures
tend to have both a large number of samples and a large number of parameters, often of roughly the
same order of magnitude. (One simple explanation for this scaling may just be that the other extremes
tend to produce over- or under-fitting.) Motivated by this observation, in this work we explore the
infinite size limit in which both the number of samples and the number of parameters go to infinity at
the same rate, i.e. n, m → ∞ with n/m = φ, for some finite constant φ. This perspective puts us
squarely in the regime of random matrix theory.
An abundance of matrices are of practical and theoretical interest in the context of random neural
networks. For example, the output of the network, its Jacobian, and the Hessian of the loss function
with respect to the weights are all interesting objects of study. In this work we focus on the
computation of the eigenvalues of the matrix M ≡ (1/m) YᵀY, where Y = f(WX), W is a Gaussian
random weight matrix, X is a Gaussian random data matrix, and f is a pointwise activation function.
random weight matrix, X is a Gaussian random data matrix, and f is a pointwise activation function.
In many ways, Y is a basic primitive whose understanding is necessary for attacking more complicated
cases; for example, Y appears in the expressions for all three of the matrices mentioned above. But
studying Y is also quite interesting in its own right, with several interesting applications to machine
learning that we will explore in Section 4.
1.1 Our contribution
The nonlinearity of the activation function prevents us from leveraging many of the existing mathematical results from random matrix theory. Nevertheless, most of the basic tools for computing spectral
densities of random matrices still apply in this setting. In this work, we show how to overcome
some of the technical hurdles that have prevented explicit computations of this type in the past. In
particular, we employ the so-called moments method, deducing the spectral density of M from the
traces tr M k . Evaluating the traces involves computing certain multi-dimensional integrals, which
we show how to evaluate, and enumerating a certain class of graphs, for which we derive a generating
function. The result of our calculation is a quartic equation which is satisfied by the trace of the
resolvent of M , G(z) = E[tr(M zI) 1 ]. It depends on two parameters that together capture the
only relevant properties of the nonlinearity f : ?, the Gaussian mean of f 2 , and ?, the square of the
Gaussian mean of f 0 . Overall, the techniques presented here pave the way for studying other types of
nonlinear random matrices relevant for the theoretical understanding of neural networks.
1.2 Applications of our results
We show that the training loss of a ridge-regularized single-layer random-feature least-squares
memorization problem with regularization parameter λ is related to λ²G′(−λ). We observe
increased memorization capacity for certain types of nonlinearities relative to others. In particular,
for a fixed value of λ, the training loss is lower if η/ζ is large, a condition satisfied by a large class of
activation functions, for example when f is close to an even function. We believe this observation
could have an important practical impact in designing next-generation activation functions.
We also examine the eigenvalue density of M and observe that if ζ = 0 the distribution collapses to
the Marchenko-Pastur distribution (Marčenko & Pastur, 1967), which describes the eigenvalues of the
Wishart matrix XᵀX. We therefore make the surprising observation that there exist functions f such
that f(WX) has the same singular value distribution as X. Said another way, the eigenvalues of the
data covariance matrix are unchanged in distribution after passing through a single nonlinear layer
of the network. We conjecture that this property is actually satisfied through arbitrary layers of the
network, and find supporting numerical evidence. This conjecture may be regarded as a claim about
the universality of our results with respect to the distribution of X. Note that preserving the first
moment of this distribution is also an effect achieved through batch normalization (Ioffe & Szegedy,
2015), although higher moments are not necessarily preserved. We therefore offer the hypothesis that
choosing activation functions with ζ = 0 might lead to improved training performance, in the same
way that batch normalization does, at least early in training.
1.3 Related work
The study of random neural networks has a relatively long history, with much of the initial work
focusing on approaches from statistical physics and the theory of spin glasses. For example, Amit
et al. (1985) analyze the long-time behavior of certain dynamical models of neural networks in terms
of an Ising spin-glass Hamiltonian, and Gardner & Derrida (1988) examine the storage capacity of
neural networks by studying the density of metastable states of a similar spin-glass system. More
recently, Choromanska et al. (2015) studied the critical points of random loss surfaces, also by
examining an associated spin-glass Hamiltonian, and Schoenholz et al. (2017) developed an exact
correspondence between random neural networks and statistical field theory.
In a somewhat tangential direction, random neural networks have also been investigated through their
relationship to kernel methods. The correspondence between infinite-dimensional neural networks
and Gaussian processes was first noted by Neal (1994a,b). In the finite-dimensional setting, the
approximate correspondence to kernel methods led to the development random feature methods that
can accelerate the training of kernel machines (Rahimi & Recht, 2007). More recently, a duality
between random neural networks with general architectures and compositional kernels was explored
by Daniely et al. (2016).
In the last several years, random neural networks have been studied from many other perspectives.
Saxe et al. (2014) examined the effect of random initialization on the dynamics of learning in deep
linear networks. Schoenholz et al. (2016) studied how information propagates through random
networks, and how that affects learning. Poole et al. (2016) and Raghu et al. (2016) investigated
various measures of expressivity in the context of deep random neural networks.
Despite this extensive literature related to random neural networks, there has been relatively little
research devoted to studying random matrices with nonlinear dependencies. The main focus in this
direction has been kernel random matrices and robust statistics models (El Karoui et al., 2010; Cheng
& Singer, 2013). In a closely-related contemporaneous work, Louart et al. (2017) examined the
resolvent of Gram matrix Y Y T in the case where X is deterministic.
2 Preliminaries
Throughout this work we will be relying on a number of basic concepts from random matrix theory.
Here we provide a lightning overview of the essentials, but refer the reader to the more pedagogical
literature for background (Tao, 2012).
2.1 Notation
Let X ∈ R^{n₀×m} be a random data matrix with i.i.d. elements X_{iμ} ∼ N(0, σ_x²) and W ∈ R^{n₁×n₀} be
a random weight matrix with i.i.d. elements W_{ij} ∼ N(0, σ_w²/n₀). As discussed in Section 1, we are
interested in the regime in which both the row and column dimensions of these matrices are large and
approach infinity at the same rate. In particular, we define

    φ ≡ n₀/m,    ψ ≡ n₀/n₁,    (1)

to be fixed constants as n₀, n₁, m → ∞. In what follows, we will frequently consider the limit that
n₀ → ∞ with the understanding that n₁ → ∞ and m → ∞, so that eqn. (1) is satisfied.
We denote the matrix of pre-activations by Z = WX. Let f : R → R be a function with
zero mean and finite moments,

    ∫ dz e^{−z²/2}/√(2π) f(σ_w σ_x z) = 0,    ∫ dz e^{−z²/2}/√(2π) f(σ_w σ_x z)^k < ∞ for k > 1,    (2)

and denote the matrix of post-activations Y = f(Z), where f is applied pointwise. We will be
interested in the Gram matrix,

    M = (1/m) Y Yᵀ ∈ R^{n₁×n₁}.    (3)
2.2 Spectral density and the Stieltjes transform
The empirical spectral density of M is defined as,

    ρ_M(t) = (1/n₁) Σ_{j=1}^{n₁} δ(t − λ_j(M)),    (4)

where δ is the Dirac delta function, and the λ_j(M), j = 1, . . . , n₁, denote the n₁ eigenvalues of M,
including multiplicity. The limiting spectral density is defined as the limit of eqn. (4) as n₁ → ∞, if
it exists.
For z ∈ C \ supp(ρ_M) the Stieltjes transform G of ρ_M is defined as,

    G(z) = ∫ ρ_M(t)/(z − t) dt = −(1/n₁) E[ tr(M − zI_{n₁})⁻¹ ],    (5)

where the expectation is with respect to the random variables W and X. The quantity (M − zI_{n₁})⁻¹
is the resolvent of M. The spectral density can be recovered from the Stieltjes transform using the
inversion formula,

    ρ_M(λ) = −(1/π) lim_{ε→0⁺} Im G(λ + iε).    (6)
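As an informal numerical companion (not part of the paper), the inversion formula can be applied to an empirical spectrum by evaluating G slightly off the real axis; the Wishart example below should track the Marchenko-Pastur density with shape φ = n₀/m:

    import numpy as np

    def stieltjes(eigs, z):
        # Empirical G(z) = (1/n) * sum_j 1/(z - lambda_j).
        return np.mean(1.0 / (z - eigs))

    n0, m = 1000, 2000
    X = np.random.default_rng(0).normal(size=(n0, m))
    eigs = np.linalg.eigvalsh(X @ X.T / m)
    xs = np.linspace(0.01, 3.5, 200)
    # eqn. (6) with a small finite eps standing in for the limit:
    rho = np.array([-stieltjes(eigs, x + 1e-2j).imag / np.pi for x in xs])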
2.3 Moment method
One of the main tools for computing the limiting spectral distributions of random matrices is the
moment method, which, as the name suggests, is based on computations of the moments of ρ_M. The
asymptotic expansion of eqn. (5) for large z gives the Laurent series,

    G(z) = Σ_{k=0}^{∞} m_k / z^{k+1},    (7)

where m_k is the kth moment of the distribution ρ_M,

    m_k = ∫ dt ρ_M(t) t^k = (1/n₁) E[tr M^k].    (8)
If one can compute m_k, then the density ρ_M can be obtained via eqns. (7) and (6). The idea behind
the moment method is to compute m_k by expanding out powers of M inside the trace as,

    (1/n₁) E[tr M^k] = (1/n₁) E[ Σ_{i₁,...,i_k ∈ [n₁]} M_{i₁i₂} M_{i₂i₃} ··· M_{i_{k−1}i_k} M_{i_k i₁} ],    (9)
and evaluating the leading contributions to the sum as the matrix dimensions go to infinity, i.e. as
n₀ → ∞. Determining the leading contributions involves a complicated combinatorial analysis,
combined with the evaluation of certain nontrivial high-dimensional integrals. In the next section and
the supplementary material, we provide an outline for how to tackle these technical components of
the computation.
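The moments themselves are easy to estimate numerically, which is useful for sanity-checking the combinatorial expansion; the sketch below (ours, with arbitrary sizes and seed) measures m_k for a tanh network at a few small k:

    import numpy as np

    def empirical_moments(f, n0=500, phi=0.5, psi=0.5, k_max=4, seed=0):
        # Monte Carlo estimate of m_k = (1/n1) E[tr M^k], M = Y Y^T / m.
        rng = np.random.default_rng(seed)
        m, n1 = int(n0 / phi), int(n0 / psi)
        W = rng.normal(scale=1.0 / np.sqrt(n0), size=(n1, n0))  # sigma_w = 1
        X = rng.normal(size=(n0, m))                            # sigma_x = 1
        Y = f(W @ X)
        eigs = np.linalg.eigvalsh(Y @ Y.T / m)
        return [np.mean(eigs ** k) for k in range(1, k_max + 1)]

    print(empirical_moments(np.tanh))  # tanh is odd, so eqn. (2) holds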
3 The Stieltjes transform of M

3.1 Main result
The following theorem characterizes G as the solution to a quartic polynomial equation.

Theorem 1. For M, φ, ψ, σ_w, and σ_x defined as in Section 2.1, and constants η and ζ defined as,

    η = ∫ dz e^{−z²/2}/√(2π) [f(σ_w σ_x z)]²    and    ζ = [ σ_w σ_x ∫ dz e^{−z²/2}/√(2π) f′(σ_w σ_x z) ]²,    (10)
the Stieltjes transform of the spectral density of M satisfies,

    G(z) = (ψ/z) P(1/(zψ)) + (1 − ψ)/z,    (11)

where,

    P = 1 + (η − ζ) t P_φ P_ψ + (P_φ P_ψ t ζ)/(1 − P_φ P_ψ t ζ),    (12)

and

    P_φ = 1 + (P − 1)φ,    P_ψ = 1 + (P − 1)ψ.    (13)
The proof of Theorem 1 is relatively long and complicated, so it is deferred to the supplementary
material. The main idea underlying the proof is to translate the calculation of the moments in
eqn. (7) into two subproblems, one of enumerating certain connected outer-planar graphs, and another
of evaluating integrals that correspond to cycles in those graphs. The complexity resides both in
characterizing which outer-planar graphs contribute at leading order to the moments, and also in
computing those moments explicitly. A generating function encapsulating these results (P from
Theorem 1) is shown to satisfy a relatively simple recurrence relation. Satisfying this recurrence
relation requires that P solve eqn. (12). Finally, some bookkeeping relates G to P .
3.2 Limiting cases

3.2.1 η = ζ
In Section 3 of the supplementary material, we use a Hermite polynomial expansion of f to show that
η = ζ if and only if f is a linear function. In this case, M = ZZᵀ, where Z = WX is a product of
Gaussian random matrices. Therefore we expect G to reduce to the Stieltjes transform of a so-called
product Wishart matrix. In (Dupic & Castillo, 2014), a cubic equation defining the Stieltjes transform
of such matrices is derived. Although eqn. (11) is generally quartic, the coefficient of the quartic term
vanishes when η = ζ (see Section 4 of the supplementary material). The resulting cubic polynomial
is in agreement with the results in (Dupic & Castillo, 2014).
3.2.2 ζ = 0

Another interesting limit is when ζ = 0, which significantly simplifies the expression in eqn. (12).
Without loss of generality, we can take η = 1 (the general case can be recovered by rescaling z). The
resulting equation is,

    (φ/ψ) z G² + (1 − φ/ψ − z) G + 1 = 0,    (14)

which is precisely the equation satisfied by the Stieltjes transform of the Marchenko-Pastur distribution
with shape parameter φ/ψ. Notice that when ψ = 1, the latter is the limiting spectral distribution of
XXᵀ, which implies that Y Yᵀ and XXᵀ have the same limiting spectral distribution. Therefore we
have identified a novel type of isospectral nonlinear transformation. We investigate this observation
in Section 4.1.
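This isospectrality is easy to probe numerically. The sketch below (our construction, with σ_w = σ_x = 1 and ψ = 1) uses the ζ = 0 function f(x) = |x| − √(2/π) and compares the spectrum of f(WX)f(WX)ᵀ/m, rescaled by η, to that of XXᵀ/m:

    import numpy as np

    f = lambda x: np.abs(x) - np.sqrt(2.0 / np.pi)  # zero mean, zeta = 0

    rng = np.random.default_rng(1)
    n, m = 1000, 2000                                # psi = 1, phi = 1/2
    X = rng.normal(size=(n, m))
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    Y = f(W @ X)
    eigs_X = np.linalg.eigvalsh(X @ X.T / m)
    eigs_Y = np.linalg.eigvalsh(Y @ Y.T / m)
    eta = 1.0 - 2.0 / np.pi                          # Gaussian mean of f^2
    print(np.percentile(eigs_X, [10, 50, 90]).round(3))
    print(np.percentile(eigs_Y / eta, [10, 50, 90]).round(3))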
4 Applications

4.1 Data covariance
Consider a deep feedforward neural network with lth-layer post-activation matrix given by,

    Y^l = f(W^l Y^{l−1}),    Y⁰ = X.    (15)
The matrix Y l (Y l )T is the lth-layer data covariance matrix. The distribution of its eigenvalues (or the
singular values of Y l ) determine the extent to which the input signals become distorted or stretched
as they propagate through the network. Highly skewed distributions indicate strong anisotropy in
the embedded feature space, which is a form of poor conditioning that is likely to derail or impede
learning. A variety of techniques have been developed to alleviate this problem, the most popular of
which is batch normalization. In batch normalization, the variance of individual activations across the
batch (or dataset) is rescaled to equal one. The covariance is often ignored; variants that attempt to
[Figure 1: log-log plots of the distance d versus network width n₀ for α = 1 (ζ = 0), α = 1/4 (ζ = 0.498), α = 0 (ζ = 0.733), α = −1/4 (ζ = 0.884), and α = −1 (ζ = 1); panel (a) shows d(ρ̄₁, ρ₁) (L = 1) and panel (b) shows d(ρ̄₁, ρ₁₀) (L = 10).]
Figure 1: Distance between the (a) first-layer and (b) tenth-layer empirical eigenvalue distributions
of the data covariance matrices and our theoretical prediction for the first-layer limiting distribution
ρ̄₁, as a function of network width n₀. Plots are for shape parameters φ = 1 and ψ = 3/2. The
different curves correspond to different piecewise linear activation functions parameterized by α:
α = −1 is linear, α = 0 is (shifted) relu, and α = 1 is (shifted) absolute value. In (a), for all α, we
see good convergence of the empirical distribution ρ₁ to our asymptotic prediction ρ̄₁. In (b), in
accordance with our conjecture, we find good agreement between ρ̄₁ and the tenth-layer empirical
distribution when ζ = 0, but not for other values of ζ. This provides evidence that when ζ = 0 the
eigenvalue distribution is preserved by the nonlinear transformations.
fully whiten the activations can be very slow. So one aspect of batch normalization, as it is used in
practice, is that it preserves the trace of the covariance matrix (i.e. the first moment of its eigenvalue
distribution) as the signal propagates through the network, but it does not control higher moments of
the distribution. A consequence is that there may still be a large imbalance in singular values.
An interesting question, therefore, is whether there exist efficient techniques that could preserve or
approximately preserve the full singular value spectrum of the activations as they propagate through
the network. Inspired by the results of Section 3.2.2, we hypothesize that choosing an activation
function with ζ = 0 may be one way to approximately achieve this behavior, at least early in training.
From a mathematical perspective, this hypothesis is similar to asking whether our results in eqn. (11)
are universal with respect to the distribution of X. We investigate this question empirically.
Let ρ_l be the empirical eigenvalue density of Y^l(Y^l)ᵀ, and let ρ̄₁ be the limiting density determined
by eqn. (11) (with l = 1). We would like to measure the distance between ρ̄₁ and ρ_l in order to
see whether the eigenvalues propagate without getting distorted. There are many options that would
suffice, but we choose to track the following metric,

    d(ρ̄₁, ρ_l) ≡ ∫ dλ |ρ̄₁(λ) − ρ_l(λ)|.    (16)
To observe the effect of varying ζ, we utilize a variant of the relu activation function with non-zero
slope for negative inputs,

    f_α(x) = ( [x]₊ + α[−x]₊ − (1 + α)/√(2π) ) / √( ½(1 + α²) − (1 + α)²/(2π) ).    (17)
One may interpret α as (the negative of) the ratio of the slope for negative x to the slope for positive
x. It is straightforward to check that f_α has zero Gaussian mean and that,

    η = 1,    ζ = (1 − α)² / ( 2(1 + α²) − (2/π)(1 + α)² ),    (18)

so we can adjust ζ (without affecting η) by changing α. Fig. 1(a) shows that for any value of α (and
thus ζ) the distance between ρ̄₁ and ρ₁ approaches zero as the network width increases. This offers
[Figure 2: panels of E_train versus log₁₀(λ/η) for ρ ∈ {−∞, −8, −6, −4, −2, 0, 2}; (a) φ = 1/2, ψ = 1/2 and (b) φ = 1/2, ψ = 3/4.]
Figure 2: Memorization performance of random feature networks versus ridge regularization parameter λ. Theoretical curves are solid lines and numerical solutions to eqn. (19) are points.
ρ ≡ log₁₀(η/ζ − 1) distinguishes classes of nonlinearities, with ρ = −∞ corresponding to a
linear network. Each numerical simulation is done with a different randomly-chosen function f and
the specified ρ. The good agreement confirms that no details about f other than ρ are relevant. In
(a), there are more random features than data points, allowing for perfect memorization unless the
function f is linear, in which case the model is rank constrained. In (b), there are fewer random
features than data points, and even the nonlinear models fail to achieve perfect memorization. For
a fixed amount of regularization λ, curves with larger values of ρ (smaller values of ζ) have lower
training loss and hence increased memorization capacity.
numerical evidence that eqn. (11) is in fact the correct asymptotic limit. It also shows how quickly
the asymptotic behavior sets in, which is useful for interpreting Fig. 1(b), which shows the distance
between ρ̄₁ and ρ₁₀. Observe that if ζ = 0, ρ₁₀ approaches ρ̄₁ as the network width increases. This
provides evidence for the conjecture that the eigenvalues are in fact preserved as they propagate
through the network, but only when ζ = 0, since we see the distances level off at some finite value
when ζ ≠ 0. We also note that small non-zero values of ζ may not distort the eigenvalues too much.
These observations suggest a new method of tuning the network for fast optimization. Recent work (Pennington et al., 2017) found that inducing dynamical isometry, i.e. equilibrating the
singular value distribution of the input-output Jacobian, can greatly speed up training. In our context,
by choosing an activation function with ζ ≈ 0, we can induce a similar type of isometry, not of the
input-output Jacobian, but of the data covariance matrix as it propagates through the network. We
conjecture that inducing this additional isometry may lead to further training speed-ups, but we leave
further investigation of these ideas to future work.
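To make eqns. (17) and (18) concrete, the following sketch (ours) implements f_α and verifies the Gaussian integrals by Monte Carlo: η should come out near 1 for every α, and the sampled ζ should match the closed form.

    import numpy as np

    def f_alpha(x, a):
        # Piecewise linear activation of eqn. (17), normalized so eta = 1.
        num = np.maximum(x, 0) + a * np.maximum(-x, 0) - (1 + a) / np.sqrt(2 * np.pi)
        den = np.sqrt(0.5 * (1 + a**2) - (1 + a)**2 / (2 * np.pi))
        return num / den

    def zeta_closed_form(a):
        # Closed form of eqn. (18).
        return (1 - a)**2 / (2 * (1 + a**2) - (2 / np.pi) * (1 + a)**2)

    z = np.random.default_rng(0).normal(size=10**6)
    for a in (1.0, 0.25, 0.0, -0.25, -1.0):
        den2 = 0.5 * (1 + a**2) - (1 + a)**2 / (2 * np.pi)
        eta = np.mean(f_alpha(z, a) ** 2)
        # f_alpha'(x) is 1/den for x > 0 and -a/den for x < 0.
        zeta = np.mean(np.where(z > 0, 1.0, -a)) ** 2 / den2
        print(a, round(float(eta), 3), round(float(zeta), 3),
              round(zeta_closed_form(a), 3))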
4.2 Asymptotic performance of random feature methods
Consider the ridge-regularized least squares loss function defined by,

    L(W₂) = 1/(2n₂m) ‖𝒴 − W₂ᵀY‖²_F + λ‖W₂‖²_F,    Y = f(WX),    (19)

where X ∈ R^{n₀×m} is a matrix of m n₀-dimensional features, 𝒴 ∈ R^{n₂×m} is a matrix of regression
targets, W ∈ R^{n₁×n₀} is a matrix of random weights and W₂ ∈ R^{n₁×n₂} is a matrix of parameters to
be learned. The matrix Y is a matrix of random features¹. The optimal parameters are,

    W₂* = (1/m) Y Q 𝒴ᵀ,    Q = ( (1/m) YᵀY + λI_m )⁻¹.    (20)

¹ We emphasize that we are using an unconventional notation for the random features: we call them Y in order
to make contact with the previous sections.
Our problem setup and analysis are similar to that of (Louart et al., 2017), but in contrast to that work,
we are interested in the memorization setting in which the network is trained on random input-output
pairs. Performance on this task is then a measure of the capacity of the model, or the complexity of
the function class it belongs to. In this context, we take the data X and the targets 𝒴 to be independent
Gaussian random matrices. From eqns. (19) and (20), the expected training loss is given by,

    E_train = E_{W,X,𝒴}[L(W₂*)] = E_{W,X,𝒴}[ (λ²/m) tr 𝒴ᵀ𝒴 Q² ]
            = E_{W,X}[ (λ²/m) tr Q² ]
            = −(λ²/m) ∂/∂λ E_{W,X}[tr Q].    (21)
It is evident from eqn. (5) and the definition of Q that E_{W,X}[tr Q] is related to G(−λ). However, our
results from the previous section cannot be used directly because Q contains YᵀY, whereas
G was computed with respect to Y Yᵀ. Thankfully, the two matrices differ only by a finite number of
zero eigenvalues. Some simple bookkeeping shows that

    (1/m) E_{W,X}[tr Q] = (1 − φ/ψ)(1/λ) − (φ/ψ) G(−λ).    (22)
From eqn. (11) and its total derivative with respect to z, an equation for G′(z) can be obtained by
computing the resultant of the two polynomials and eliminating G(z). An equation for E_train follows;
see Section 4 of the supplementary material for details. An analysis of this equation shows that it is
homogeneous in λ, η, and ζ, i.e., for any γ > 0,

    E_train(γλ, γη, γζ) = E_train(λ, η, ζ).    (23)

In fact, this homogeneity is entirely expected from eqn. (19): an increase in the regularization constant
λ can be compensated by a decrease in scale of W₂, which, in turn, can be compensated by increasing
the scale of Y, which is equivalent to increasing η and ζ. Owing to this homogeneity, we are free to
choose γ = 1/η. For simplicity, we set η = 1 and examine the two-variable function E_train(λ, 1, ζ).
The behavior when γ = 0 is a measure of the capacity of the model with no regularization and
depends on the value of ζ:

    Etrain(0, 1, ζ) = [1 − φ]₊    if ζ = 1 and ψ < 1,
    Etrain(0, 1, ζ) = [1 − φ/ψ]₊    otherwise.    (24)

As discussed in Section 3.2, when η = ζ = 1, the function f reduces to the identity. With this in
mind, the various cases in eqn. (24) are readily understood by considering the effective rank of the
random feature matrix Y.
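A small numerical illustration of this rank argument (ours; the sizes are arbitrary): fitting random targets with ridgeless least squares, linear features leave a residual fraction of roughly [1 − φ]₊ = 0.5 at these sizes, while a nonlinear f memorizes perfectly since n₁ ≥ m.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n0, n1 = 400, 200, 800      # phi = n0/m = 0.5, psi = n0/n1 = 0.25 < 1
X = rng.standard_normal((n0, m))
W = rng.standard_normal((n1, n0)) / np.sqrt(n0)
T = rng.standard_normal((1, m))                    # random targets

for name, f in [("linear", lambda u: u), ("tanh", np.tanh)]:
    Y = f(W @ X)                                   # rank min(n0, n1) if f is linear
    W2, *_ = np.linalg.lstsq(Y.T, T.T, rcond=None) # ridgeless (gamma -> 0) fit
    err = np.mean((T - W2.T @ Y) ** 2) / np.mean(T ** 2)
    print(name, round(err, 3))  # linear ~ [1 - phi]+ = 0.5; tanh ~ 0
```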
In Fig. 2, we compare our theoretical predictions for Etrain to numerical simulations of solutions
to eqn. (19). The different curves explore various ratios of η/ζ, parameterized by log₁₀(η/ζ − 1), and therefore
probe different classes of nonlinearities. For each numerical simulation, we choose a random
quintic polynomial f with the specified value of ζ (for details on this choice, see Section 3 of the
supplementary material). The excellent agreement between theory and simulations confirms that
Etrain depends only on ζ and not on any other details of f. The black curves correspond to the
performance of a linear network. The results show that for ζ very close to η, the models are unable to
utilize their nonlinearity unless the regularization parameter γ is very small. Conversely, for ζ close to
zero, the models exploit the nonlinearity very efficiently and absorb large amounts of regularization
without a significant drop in performance. This suggests that small ζ might provide an interesting
class of nonlinear functions with enhanced expressive power. See Fig. 3 for some examples of
activation functions with this property.
5 Conclusions
In this work we studied the Gram matrix M = (1/m) YᵀY, where Y = f(WX) and W and X are
random Gaussian matrices. We derived a quartic polynomial equation satisfied by the trace of the
resolvent of M, which defines its limiting spectral density. In obtaining this result, we demonstrated
that pointwise nonlinearities can be incorporated into a standard method of proof in random matrix
theory known as the moments method, thereby opening the door for future study of other nonlinear
random matrices appearing in neural networks.

Figure 3: Examples of activation functions and their derivatives for which η = 1 and ζ = 0. In
red, f⁽¹⁾(x) = c₁(−1 + √5 e^{−2x²}); in green, f⁽²⁾(x) = c₂(sin(2x) + cos(3x/2) − 2e⁻² x − e^{−9/8});
in orange, f⁽³⁾(x) = c₃(|x| − √(2/π)); and in blue, f⁽⁴⁾(x) = c₄(1 − (4/√3) e^{−x²/2}) erf(x). If we let
σ_w = σ_x = 1, then eqn. (2) is satisfied and ζ = 0 for all cases. We choose the normalization
constants cᵢ so that η = 1.
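The formulas in the Figure 3 caption were reconstructed from a damaged extraction, so the following Monte Carlo sketch (ours, not the paper's code) sanity-checks the defining property ζ = 0 via Stein's identity E[f′(Z)] = E[Z f(Z)]; the normalization constants cᵢ are omitted since they do not affect whether the expectation vanishes.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(2)
z = rng.standard_normal(4_000_000)

fs = {
    "f1": lambda x: -1.0 + np.sqrt(5.0) * np.exp(-2.0 * x**2),
    "f2": lambda x: np.sin(2*x) + np.cos(1.5*x) - 2*np.exp(-2.0)*x - np.exp(-9/8),
    "f3": lambda x: np.abs(x) - np.sqrt(2.0 / np.pi),
    "f4": lambda x: (1.0 - (4.0/np.sqrt(3.0)) * np.exp(-x**2 / 2.0)) * erf(x),
}
for name, f in fs.items():
    stein = np.mean(z * f(z))   # Stein: E[Z f(Z)] = E[f'(Z)]; zeta = stein**2
    eta = np.mean(f(z)**2)      # eta before normalizing by c_i
    print(f"{name}: E[Z f(Z)] = {stein:+.4f} (should be ~0), eta/c^2 = {eta:.4f}")
```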
We applied our results to a memorization task in the context of random feature methods and obtained
an explicit characterization of the training error as a function of a ridge regression parameter. The
training error depends on the nonlinearity only through two scalar quantities, η and ζ, which are
certain Gaussian integrals of f. We observe that functions with small values of ζ appear to have
increased capacity relative to those with larger values of ζ.
We also make the surprising observation that for ζ = 0, the singular value distribution of f(WX) is
the same as the singular value distribution of X. In other words, the eigenvalues of the data covariance
matrix are constant in distribution when passing through a single nonlinear layer of the network.
We conjectured and found numerical evidence that this property actually holds when passing the
signal through multiple layers. Therefore, we have identified a class of activation functions that
maintains approximate isometry at initialization, which could have important practical consequences
for training speed.
Both of our applications suggest that functions with ζ ≈ 0 are a potentially interesting class of
activation functions. This is a large class of functions, as evidenced in Fig. 3, among which are many
types of nonlinearities that have not been thoroughly explored in practical applications. It would be
interesting to investigate these nonlinearities in future work.
References
Amit, Daniel J, Gutfreund, Hanoch, and Sompolinsky, Haim. Spin-glass models of neural networks. Physical Review A, 32(2):1007, 1985.
Cheng, Xiuyuan and Singer, Amit. The spectrum of random inner-product kernel matrices. Random Matrices: Theory and Applications, 2(04):1350010, 2013.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015.
Daniely, A., Frostig, R., and Singer, Y. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. arXiv:1602.05897, 2016.
Dupic, Thomas and Castillo, Isaac Pérez. Spectral density of products of Wishart dilute random matrices. Part I: the dense case. arXiv preprint arXiv:1401.7802, 2014.
El Karoui, Noureddine et al. The spectrum of kernel random matrices. The Annals of Statistics, 38(1):1–50, 2010.
Gardner, E and Derrida, B. Optimal storage properties of neural network models. Journal of Physics A: Mathematical and General, 21(1):271, 1988.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E., Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448–456, 2015.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Louart, Cosme, Liao, Zhenyu, and Couillet, Romain. A random matrix approach to neural networks. arXiv preprint arXiv:1702.05419, 2017.
Marčenko, Vladimir A and Pastur, Leonid Andreevich. Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457, 1967.
Neal, Radford M. Priors for infinite networks (tech. rep. no. crg-tr-94-1). University of Toronto, 1994a.
Neal, Radford M. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, Dept. of Computer Science, 1994b.
Oord, Aaron van den, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alex, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Pennington, J, Schoenholz, S, and Ganguli, S. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in Neural Information Processing Systems, 2017.
Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Ganguli, S. Exponential expressivity in deep neural networks through transient chaos. arXiv:1606.05340, June 2016.
Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Sohl-Dickstein, J. On the expressive power of deep neural networks. arXiv:1606.05336, June 2016.
Rahimi, Ali and Recht, Ben. Random features for large-scale kernel machines. In Neural Information Processing Systems, 2007.
Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations, 2014.
Schoenholz, S. S., Gilmer, J., Ganguli, S., and Sohl-Dickstein, J. Deep Information Propagation. ArXiv e-prints, November 2016.
Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. A Correspondence Between Random Neural Networks and Statistical Field Theory. ArXiv e-prints, 2017.
Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and Dean, J. Outrageously large neural language models using sparsely gated mixtures of experts. ICLR, 2017. URL http://arxiv.org/abs/1701.06538.
Tao, Terence. Topics in random matrix theory, volume 132. American Mathematical Society, Providence, RI, 2012.
Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V., Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
6,476 | 6,858 | Parallel Streaming Wasserstein Barycenters
Matthew Staib
MIT CSAIL
[email protected]
Sebastian Claici
MIT CSAIL
[email protected]
Justin Solomon
MIT CSAIL
[email protected]
Stefanie Jegelka
MIT CSAIL
[email protected]
Abstract
Efficiently aggregating data from different sources is a challenging problem, particularly when samples from each source are distributed differently. These differences
can be inherent to the inference task or present for other reasons: sensors in a sensor
network may be placed far apart, affecting their individual measurements. Conversely, it is computationally advantageous to split Bayesian inference tasks across
subsets of data, but data need not be identically distributed across subsets. One principled way to fuse probability distributions is via the lens of optimal transport: the
Wasserstein barycenter is a single distribution that summarizes a collection of input
measures while respecting their geometry. However, computing the barycenter
scales poorly and requires discretization of all input distributions and the barycenter
itself. Improving on this situation, we present a scalable, communication-efficient,
parallel algorithm for computing the Wasserstein barycenter of arbitrary distributions. Our algorithm can operate directly on continuous input distributions and is
optimized for streaming data. Our method is even robust to nonstationary input
distributions and produces a barycenter estimate that tracks the input measures
over time. The algorithm is semi-discrete, needing to discretize only the barycenter
estimate. To the best of our knowledge, we also provide the first bounds on the
quality of the approximate barycenter as the discretization becomes finer. Finally,
we demonstrate the practical effectiveness of our method, both in tracking moving
distributions on a sphere, as well as in a large-scale Bayesian inference task.
1
Introduction
A key challenge when scaling up data aggregation occurs when data comes from multiple sources,
each with its own inherent structure. Sensors in a sensor network may be configured differently or
placed far apart, but each individual sensor simply measures a different view of the same quantity.
Similarly, user data collected by a server in California will differ from that collected by a server in
Europe: the data samples may be independent but are not identically distributed.
One reasonable approach to aggregation in the presence of multiple data sources is to perform
inference on each piece independently and fuse the results. This is possible when the data can be
distributed randomly, using methods akin to distributed optimization [52, 53]. However, when the
data is not split in an i.i.d. way, Bayesian inference on different subsets of observed data yields
slightly different ?subset posterior? distributions for each subset that must be combined [33]. Further
complicating matters, data sources may be nonstationary. How can we fuse these different data
sources for joint analysis in a consistent and structure-preserving manner?
We address this question using ideas from the theory of optimal transport. Optimal transport gives us a
principled way to measure distances between measures that takes into account the underlying space on
which the measures are defined. Intuitively, the optimal transport distance between two distributions
measures the amount of work one would have to do to move all mass from one distribution to the
other. Given J input measures {μ_j}_{j=1}^J, it is natural, in this setting, to ask for a measure ν that
minimizes the total squared distance to the input measures. This measure ν is called the Wasserstein
barycenter of the input measures [1], and should be thought of as an aggregation of the input measures
which preserves their geometry. This particular aggregation enjoys many nice properties: in the
earlier Bayesian inference example, aggregating subset posterior distributions via their Wasserstein
barycenter yields guarantees on the original inference task [47].
If the measures μ_j are discrete, their barycenter can be computed relatively efficiently via either
a sparse linear program [2], or regularized projection-based methods [16, 7, 51, 17]. However, 1.
these techniques scale poorly with the support of the measures, and quickly become impractical
as the support becomes large. 2. When the input measures are continuous, to the best of our
knowledge the only option is to discretize them via sampling, but the rate of convergence to the true
(continuous) barycenter is not well-understood. These two confounding factors make it difficult to
utilize barycenters in scenarios like parallel Bayesian inference where the measures are continuous
and a fine approximation is needed. These are the primary issues we work to address in this paper.
Given sample access to J potentially continuous distributions μ_j, we propose a communication-efficient, parallel algorithm to estimate their barycenter. Our method can be parallelized to J worker
machines, and the messages sent between machines are merely single integers. We require a discrete
approximation only of the barycenter itself, making our algorithm semi-discrete, and our algorithm
scales well to fine approximations (e.g. n ≈ 10⁶). In contrast to previous work, we provide guarantees
on the quality of the approximation as n increases. These rates apply to the general setting in which
the μ_j's are defined on manifolds, with applications to directional statistics [46]. Our algorithm
is based on stochastic gradient descent as in [22] and hence is robust to gradual changes in the
distributions: as the μ_j's change over time, we maintain a moving estimate of their barycenter, a task
which is not possible using current methods without solving a large linear program in each iteration.
We emphasize that we aggregate the input distributions into a summary, the barycenter, which is itself
a distribution. Instead of performing any single domain-specific task such as clustering or estimating
an expectation, we can simply compute the barycenter of the inputs and process it later any arbitrary
way. This generality coupled with the efficiency and parallelism of our algorithm yields immediate
applications in fields from large scale Bayesian inference to e.g. streaming sensor fusion.
Contributions. 1. We give a communication-efficient and fully parallel algorithm for computing
the barycenter of a collection of distributions. Although our algorithm is semi-discrete, we stress
that the input measures can be continuous, and even nonstationary. 2. We give bounds on the quality
of the recovered barycenter as our discretization becomes finer. These are the first such bounds we
are aware of, and they apply to measures on arbitrary compact and connected manifolds. 3. We
demonstrate the practical effectiveness of our method, both in tracking moving distributions on a
sphere, as well as in a real large-scale Bayesian inference task.
1.1 Related work
Optimal transport. A comprehensive treatment of optimal transport and its many applications is
beyond the scope of our work. We refer the interested reader to the detailed monographs by Villani
[49] and Santambrogio [42]. Fast algorithms for optimal transport have been developed in recent
years via Sinkhorn's algorithm [15] and in particular stochastic gradient methods [22], on which
we build in this work. These algorithms have enabled several applications of optimal transport and
Wasserstein metrics to machine learning, for example in supervised learning [21], unsupervised
learning [34, 5], and domain adaptation [14]. Wasserstein barycenters in particular have been applied
to a wide variety of problems including fusion of subset posteriors [47], distribution clustering [51],
shape and texture interpolation [45, 40], and multi-target tracking [6].
When the distributions μ_j are discrete, transport barycenters can be computed relatively efficiently via
either a sparse linear program [2] or regularized projection-based methods [16, 7, 51, 17]. In settings
like posterior inference, however, the distributions μ_j are likely continuous rather than discrete, and
the most obvious viable approach requires discrete approximation of each μ_j. The resulting discrete
barycenter converges to the true, continuous barycenter as the approximations become finer [10, 28],
but the rate of convergence is not well-understood, and finely approximating each μ_j yields a very
large linear program.
Scalable Bayesian inference. Scaling Bayesian inference to large datasets has become an important
topic in recent years. There are many approaches to this, ranging from parallel Gibbs sampling [38, 26]
to stochastic and streaming algorithms [50, 13, 25, 12]. For a more complete picture, we refer the
reader to the survey by Angelino et al. [3].
One promising method is via subset posteriors: instead of sampling from the posterior distribution
given by the full data, the data is split into smaller tractable subsets. Performing inference on each
subset yields several subset posteriors, which are biased but can be combined via their Wasserstein
barycenter [47], with provable guarantees on approximation quality. This is in contrast to other
methods that rely on summary statistics to estimate the true posterior [33, 36] and that require
additional assumptions. In fact, our algorithm works with arbitrary measures and on manifolds.
2 Background
Let (X, d) be a metric space. Given two probability measures μ ∈ P(X) and ν ∈ P(X) and a cost
function c : X × X → [0, ∞), the Kantorovich optimal transport problem asks for a solution to

    inf { ∫_{X×X} c(x, y) dγ(x, y) : γ ∈ Π(μ, ν) },    (1)

where Π(μ, ν) is the set of measures on the product space X × X whose marginals evaluate to μ and
ν, respectively.

Under mild conditions on the cost function (lower semi-continuity) and the underlying space (completeness and separability), problem (1) admits a solution [42]. Moreover, if the cost function is
of the form c(x, y) = d(x, y)^p, the optimal transportation cost is a distance metric on the space of
probability measures. This is known as the Wasserstein distance and is given by

    W_p(μ, ν) = ( inf_{γ∈Π(μ,ν)} ∫_{X×X} d(x, y)^p dγ(x, y) )^{1/p}.    (2)
Optimal transport has recently attracted much attention in machine learning and adjacent communities [21, 34, 14, 39, 41, 5]. When μ and ν are discrete measures, problem (2) is a linear program,
although faster regularized methods based on Sinkhorn iteration are used in practice [15]. Optimal
transport can also be computed using stochastic first-order methods [22].
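For concreteness, the fully discrete case of problem (2) can be solved directly as a linear program; the sketch below (ours, using scipy's HiGHS solver, with a hypothetical helper w2_discrete) is a baseline for tiny point clouds, not the method of this paper.

```python
import numpy as np
from scipy.optimize import linprog

def w2_discrete(x, y, a, b):
    """W2 between sum_i a_i delta_{x_i} and sum_j b_j delta_{y_j} (rows = points)."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))                      # plan raveled row-major
    for i in range(n):
        A_eq[i, i*m:(i+1)*m] = 1.0                       # row marginals = a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                          # column marginals = b
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return np.sqrt(res.fun)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(6, 2)) + 1.0
print(w2_discrete(x, y, np.full(5, 0.2), np.full(6, 1/6)))
```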
Now let μ₁, …, μ_J be measures on X. The Wasserstein barycenter problem, introduced by Agueh
and Carlier [1], is to find a measure ν ∈ P(X) that minimizes the functional

    F[ν] := (1/J) Σ_{j=1}^J W₂²(μ_j, ν).    (3)
Finding the barycenter ν is the primary problem we address in this paper. When each μ_j is a
discrete measure, the exact barycenter can be found via linear programming [2], and many of the
regularization techniques apply for approximating it [16, 17]. However, the problem size grows
quickly with the size of the support. When the measures μ_j are truly continuous, we are aware of
only one strategy: sample from each μ_j in order to approximate it by the empirical measure, and then
solve the discrete barycenter problem.
We directly address the problem of computing the barycenter when the input measures can be
continuous. We solve a semi-discrete problem, where the target measure is a finite set of points, but
we do not discretize any other distribution.
3 Algorithm
We first provide some background on the dual formulation of optimal transport. Then we derive
a useful form of the barycenter problem, provide an algorithm to solve it, and prove convergence
guarantees. Finally, we demonstrate how our algorithm can easily be parallelized.
3.1 Mathematical preliminaries
The primal optimal transport problem (1) admits a dual problem [42]:

    OT_c(μ, ν) = sup_{v 1-Lipschitz} { E_{Y∼ν}[v(Y)] + E_{X∼μ}[v^c(X)] },    (4)
where v^c(x) = inf_{y∈X} {c(x, y) − v(y)} is the c-transform of v [49]. When ν = Σ_{i=1}^n w_i δ_{y_i} is
discrete, problem (4) becomes the semi-discrete problem

    OT_c(μ, ν) = max_{v∈R^n} { ⟨w, v⟩ + E_{X∼μ}[h(X, v)] },    (5)

where we define h(x, v) = v^c(x) = min_{i=1,…,n} {c(x, y_i) − v_i}. Semi-discrete optimal transport
admits efficient algorithms [31, 29]; Genevay et al. [22] in particular observed that given sample
oracle access to μ, the semi-discrete problem can be solved via stochastic gradient ascent. Hence
optimal transport distances can be estimated even in the semi-discrete setting.
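A minimal sketch of that stochastic ascent on the dual (5), in the spirit of [22] but simplified by us (squared Euclidean cost and an assumed 1/√t stepsize):

```python
import numpy as np

def semidiscrete_dual_sga(sample_mu, y, w, steps=50_000, lr=1.0, seed=0):
    """Maximize <w, v> + E_{X~mu}[h(X, v)] over v, h(x, v) = min_i c(x, y_i) - v_i."""
    rng = np.random.default_rng(seed)
    v = np.zeros(len(w))
    for t in range(1, steps + 1):
        x = sample_mu(rng)
        i_star = np.argmin(((y - x) ** 2).sum(-1) - v)  # argmin of c(x, y_i) - v_i
        grad = w.copy()
        grad[i_star] -= 1.0            # unbiased gradient: w - e_{i*(x)}
        v += (lr / np.sqrt(t)) * grad
    return v

# Example: mu = standard 2-D Gaussian, nu = 10 uniform-weight atoms.
y = np.random.default_rng(1).normal(size=(10, 2))
v = semidiscrete_dual_sga(lambda r: r.normal(size=2), y, np.full(10, 0.1))
```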
3.2 Deriving the optimization problem
Absolutely continuous measures can be approximated arbitrarily well by discrete distributions with
respect to Wasserstein distance [30]. Hence one natural approach to the barycenter problem (3) is to
approximate the true barycenter via discrete approximation: we fix n support points {y_i}_{i=1}^n ⊂ X
and search over assignments of the mass w_i on each point y_i. In this way we wish to find the discrete
distribution ν_n = Σ_{i=1}^n w_i δ_{y_i} with support on those n points which optimizes

    min_{w∈Δ_n} F(w) = min_{w∈Δ_n} (1/J) Σ_{j=1}^J W₂²(μ_j, ν_n)    (6)

                     = min_{w∈Δ_n} (1/J) Σ_{j=1}^J max_{v^j∈R^n} { ⟨w, v^j⟩ + E_{X_j∼μ_j}[h(X_j, v^j)] },    (7)

where we have defined F(w) := F[ν_n] = F[Σ_{i=1}^n w_i δ_{y_i}] and used the dual formulation from
equation (5). In Section 4, we discuss the effect of different choices for the support points {y_i}_{i=1}^n.
Noting that the variables v^j are uncoupled, we can rearrange to get the following problem:

    min_{w∈Δ_n} max_{v¹,…,v^J} (1/J) Σ_{j=1}^J ( ⟨w, v^j⟩ + E_{X_j∼μ_j}[h(X_j, v^j)] ).    (8)
Problem (8) is convex in w and jointly concave in the v^j, and we can compute an unbiased gradient
estimate for each by sampling X_j ∼ μ_j. Hence, we could solve this saddle-point problem via
simultaneous (sub)gradient steps as in Nemirovski and Rubinstein [37]. Such methods are simple
to implement, but in the current form we must project onto the simplex Δ_n at each iteration. This
requires only O(n log n) time [24, 32, 19] but makes it hard to decouple the problem across each
distribution μ_j. Fortunately, we can reformulate the problem in a way that avoids projection entirely.
By strong duality, Problem (8) can be written as

    max_{v¹,…,v^J} min_{w∈Δ_n} { ⟨ (1/J) Σ_{j=1}^J v^j, w ⟩ + (1/J) Σ_{j=1}^J E_{X_j∼μ_j}[h(X_j, v^j)] }    (9)

    = max_{v¹,…,v^J} { min_i (1/J) Σ_{j=1}^J v_i^j + (1/J) Σ_{j=1}^J E_{X_j∼μ_j}[h(X_j, v^j)] }.    (10)
Note how the variable w disappears: for any fixed vector b, minimization of ⟨b, w⟩ over w ∈ Δ_n is
equivalent to finding the minimum element of b. The optimal w can also be computed in closed form
when the barycentric cost is entropically regularized as in [9], which may yield better convergence
rates but requires dense updates that, e.g., need more communication in the parallel setting. In either
case, we are left with a concave maximization problem in v¹, …, v^J, to which we can directly apply
stochastic gradient ascent. Unfortunately the gradients are still not sparse and decoupled. We obtain
sparsity after one final transformation of the problem: by replacing each (1/J) Σ_{j=1}^J v_i^j with a variable s_i
and enforcing this equality with a constraint, we turn problem (10) into the constrained problem
    max_{s, v¹,…,v^J} { min_i s_i + (1/J) Σ_{j=1}^J E_{X_j∼μ_j}[h(X_j, v^j)] }    s.t.    s = (1/J) Σ_{j=1}^J v^j.    (11)

3.3 Algorithm and convergence
We can now solve this problem via stochastic projected subgradient ascent. This is described in
Algorithm 1; note that the sparse adjustments after the gradient step are actually projections onto the
constraint set with respect to the ℓ₁ norm. Derivation of this sparse projection step is given rigorously
in Appendix A. Not only do we have an optimization algorithm with sparse updates, but we can even
recover the optimal weights w from standard results in online learning [20]. Specifically, in a zero-sum
game where one player plays a no-regret learning algorithm and the other plays a best-response
strategy, the average strategies of both players converge to optimal:

Algorithm 1 Subgradient Ascent
  s, v¹, …, v^J ← 0_n
  loop
    Draw j ∼ Unif[1, …, J]
    Draw x ∼ μ_j
    i_W ← argmin_i {c(x, y_i) − v_i^j}
    i_M ← argmin_i s_i
    v^j_{i_W} ← v^j_{i_W} − γ          ▷ Gradient update
    s_{i_M} ← s_{i_M} + γ/J            ▷ Gradient update
    v^j_{i_W} ← v^j_{i_W} + γ/2        ▷ Projection
    v^j_{i_M} ← v^j_{i_M} + γ/(2J)     ▷ Projection
    s_{i_W} ← s_{i_W} − γ/2            ▷ Projection
    s_{i_M} ← s_{i_M} − γ/(2J)         ▷ Projection
  end loop

Theorem 3.1. Perform T iterations of stochastic subgradient ascent on u = (s, v¹, …, v^J) as in
Algorithm 1, and use step size γ = R/(4√T), assuming ‖u_t − u*‖₁ ≤ R for all t. Let i_t be the
minimizing index chosen at iteration t, and write w_T = (1/T) Σ_{t=1}^T e_{i_t}. Then we can bound

    E[F(w_T) − F(w*)] ≤ 4R/√T.    (12)

The expectation is with respect to the randomness in the subgradient estimates g_t.
Theorem 3.1 is proved in Appendix B. The proof combines the zero-sum game idea above, which
itself comes from [20], with a regret bound for online gradient descent [54, 23].
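For readers who prefer code, here is a sequential Python sketch of Algorithm 1 (ours; the paper's actual implementation is parallel C++/MPI). The helper samplers[j] is assumed to draw one sample from μ_j, and the returned weights implement Theorem 3.1's average of the minimizing indices i_t.

```python
import numpy as np

def barycenter_sga(samplers, y, steps=200_000, gamma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    J, n = len(samplers), len(y)
    v = np.zeros((J, n))            # one dual vector per input measure
    s = np.zeros(n)                 # coupling variable, s = (1/J) sum_j v^j
    counts = np.zeros(n)            # tallies of i_M, averaged into weights w
    for _ in range(steps):
        j = rng.integers(J)
        x = samplers[j](rng)
        i_w = np.argmin(((y - x) ** 2).sum(-1) - v[j])
        i_m = np.argmin(s)
        counts[i_m] += 1
        v[j, i_w] -= gamma          # gradient update
        s[i_m] += gamma / J         # gradient update
        v[j, i_w] += gamma / 2      # l1 projection back onto s = (1/J) sum_j v^j
        v[j, i_m] += gamma / (2 * J)
        s[i_w] -= gamma / 2
        s[i_m] -= gamma / (2 * J)
    return counts / counts.sum()    # barycenter weights over the atoms y
```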
3.4 Parallel Implementation
The key realization which makes our barycenter algorithm truly scalable is that the variables
s, v¹, …, v^J can be separated across different machines. In particular, the "sum" or "coupling"
variable s is maintained on a master thread which runs Algorithm 2, and each v^j is maintained on a
worker thread running Algorithm 3. Each projected gradient step requires first selecting distribution j.
The algorithm then requires computing only i_W = argmin_i {c(x_j, y_i) − v_i^j} and i_M = argmin_i s_i,
and then updating s and v^j in only those coordinates. Hence only a small amount of information (i_W
and i_M) need pass between threads.
Note also that this algorithm can be adapted to the parallel shared-memory case, where s is a variable
shared between threads which make sparse updates to it. Here we will focus on the first master/worker
scenario for simplicity.
Where are the bottlenecks? When there are n points in the discrete approximation, each worker's
task of computing argmin_i {c(x_j, y_i) − v_i^j} requires O(n) computations of c(x, y). The master must
iteratively find the minimum element s_{i_M} in the vector s, then update s_{i_M}, and decrease element s_{i_W}.
These can be implemented respectively as the "find min", "delete min" then "insert", and "decrease
min" operations in a Fibonacci heap. All these operations together take amortized O(log n) time.
Hence, it takes O(n) time for all J workers to each produce one gradient sample in parallel, and
only O(J log n) time for the master to process them all. Of course, communication is not free, but
the messages are small and our approach should scale well for J ≪ n.
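A binary heap with lazy deletion serves the same access pattern with the same O(log n) amortized cost up to constants; the class below is our illustration of the master's bookkeeping, not the paper's implementation.

```python
import heapq

class MinTracker:
    """Track argmin of a vector s under sparse additive updates (lazy deletion)."""
    def __init__(self, s):
        self.s = list(s)
        self.heap = [(val, i) for i, val in enumerate(self.s)]
        heapq.heapify(self.heap)

    def update(self, i, delta):
        self.s[i] += delta
        heapq.heappush(self.heap, (self.s[i], i))  # old entries become stale

    def argmin(self):
        while self.heap[0][0] != self.s[self.heap[0][1]]:
            heapq.heappop(self.heap)               # discard stale entries
        return self.heap[0][1]
```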
This parallel algorithm is particularly well-suited to the Wasserstein posterior (WASP) [48] framework
for merging Bayesian subset posteriors. In this setting, we split the dataset X₁, …, X_k into J subsets
S₁, …, S_J each with k/J data points, distribute those subsets to J different machines, then each
machine runs Markov Chain Monte Carlo (MCMC) to sample from p(θ|S_i), and we aggregate
these posteriors via their barycenter. The most expensive subroutine in the worker thread is actually
sampling from the posterior, and everything else is cheap in comparison. In particular, the machines
need not even share samples from their respective MCMC chains.
One subtlety is that selecting worker j truly uniformly at random each iteration requires more
synchronization, hence our gradient estimates are not actually independent as usual. Selecting worker
threads as they are available will fail to yield a uniform distribution over j, as at the moment worker
j finishes one gradient step, the probability that worker j is the next available is much less than 1/J:
worker j must resample and recompute i_W, whereas other threads would have a head start. If workers
all took precisely the same amount of time, the ordering of worker threads would be deterministic, and
guarantees for without-replacement sampling variants of stochastic gradient ascent would apply [44].
In practice, we have no issues with our approach.
Algorithm 2 Master Thread
  Input: atoms {y_i}_{i=1,…,N}, number J of distributions, step size γ
  Output: barycenter weights w
  c ← 0_n
  s ← 0_n
  i_M ← 1
  loop
    i_W ← message from worker j
    Send i_M to worker j
    c_{i_M} ← c_{i_M} + 1
    s_{i_M} ← s_{i_M} + γ/(2J)
    s_{i_W} ← s_{i_W} − γ/2
    i_M ← argmin_i s_i
  end loop
  return w ← c / (Σ_{i=1}^n c_i)

Algorithm 3 Worker Thread
  Input: index j, distribution μ_j, atoms {y_i}_{i=1,…,N}, number J of distributions, step size γ
  v ← 0_n
  loop
    Draw x ∼ μ_j
    i_W ← argmin_i {c(x, y_i) − v_i}
    Send i_W to master
    i_M ← message from master
    v_{i_M} ← v_{i_M} + γ/(2J)
    v_{i_W} ← v_{i_W} − γ/2
  end loop

4 Consistency

Prior methods for estimating the Wasserstein barycenter ν* of continuous measures μ_j ∈ P(X) involve first
approximating each μ_j by a measure μ_{j,n} that has finite support on n points, then computing the barycenter ν*_n of
{μ_{j,n}} as a surrogate for ν*. This approach is consistent, in that if μ_{j,n} → μ_j as n → ∞, then also ν*_n → ν*.
This holds even if the barycenter is not unique, both in the Euclidean case [10, Theorem 3.1] as well as when X is
a Riemannian manifold [28, Theorem 5.4]. However, it is not known how fast the approximation ν*_n approaches
the true barycenter ν*, or even how fast the barycentric distance F[ν*_n] approaches F[ν*].

In practice, not even the approximation ν*_n is computed exactly: instead, support points are chosen and ν*_n is
constrained to have support on those points. There are various heuristic methods for choosing these support points,
ranging from mesh grids of the support, to randomly sampling points from the convex hull of the supports of μ_j, or even
optimizing over the support point locations. Yet we are unaware of any rigorous guarantees on the quality of these
approximations.

While our approach still involves approximating the barycenter ν* by a measure ν*_n with fixed support, we
are able to provide bounds on the quality of this approximation as n → ∞. Specifically, we bound the rate at
which F[ν*_n] → F[ν*]. The result is intuitive, and appeals to the notion of an ε-cover of the support of the barycenter:

Definition 4.1 (Covering Number). The ε-covering number of a compact set K ⊆ X, with respect to the metric g,
is the minimum number N_ε(K) of points {x_i}_{i=1}^{N_ε(K)} ⊆ K needed so that for each y ∈ K, there is some x_i with
g(x_i, y) ≤ ε. The set {x_i} is called an ε-covering.

Definition 4.2 (Inverse Covering Radius). Fix n ∈ Z₊.
We define the n-inverse covering radius of compact K ⊆ X as the value ε_n(K) = inf{ε > 0 :
N_ε(K) ≤ n}, when n is large enough so the infimum exists.
Suppose throughout this section that K ⊂ R^d is endowed with a Riemannian metric g, where K has
diameter D. In the specific case where g is the usual Euclidean metric, there is an ε-cover for K with
at most C₁ ε^{−d} points, where C₁ depends only on the diameter D and dimension d [43]. Reversing
the inequality, K has an n-inverse covering radius of at most ε_n(K) ≤ C₂ n^{−1/d} when n takes the correct
form.
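For intuition, an ε-cover of a finite point cloud can be constructed greedily by farthest-point traversal; the sketch below (ours) returns centers that are pairwise more than ε apart, so their number is at most the (ε/2)-covering number.

```python
import numpy as np

def greedy_eps_cover(points, eps):
    """Farthest-point traversal: indices of centers forming an eps-cover
    of the rows of `points` (centers are pairwise > eps apart)."""
    centers = [0]
    d = np.linalg.norm(points - points[0], axis=1)   # distance to nearest center
    while d.max() > eps:
        i = int(np.argmax(d))                        # farthest uncovered point
        centers.append(i)
        d = np.minimum(d, np.linalg.norm(points - points[i], axis=1))
    return centers
```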
We now present and then prove our main result:
Theorem 4.1. Suppose the measures μ_j are supported on K, and suppose μ₁ is absolutely continuous
with respect to volume. Then the barycenter ν* is unique. Moreover, for each empirical approximation
size n, if we choose support points {y_i}_{i=1,…,n} that constitute a 2ε_n(K)-cover of K, it follows that
F[ν*_n] − F[ν*] ≤ O(ε_n(K) + n^{−1/d}), where ν*_n = Σ_{i=1}^n w*_i δ_{y_i} for w* solving Problem (8).
Remark 4.1. Absolute continuity is only needed to reason about approximating the barycenter
with an N-point discrete distribution. If the input distributions are themselves discrete distributions,
so is the barycenter, and we can strengthen our result. For large enough n, we actually have
W₂(ν*_n, ν*) ≤ 2ε_n(K) and therefore F[ν*_n] − F[ν*] ≤ O(ε_n(K)).
Corollary 4.1 (Convergence to ν*). Suppose the measures μ_j are supported on K, with μ₁ absolutely
continuous with respect to volume. Let ν* be the unique minimizer of F. Then we can choose support
points {y_i}_{i=1,…,n} such that some subsequence of ν*_n = Σ_{i=1}^n w*_i δ_{y_i} converges weakly to ν*.
Proof. By Theorem 4.1, we can choose support points so that F[ν*_n] → F[ν*]. By compactness, the
sequence ν*_n admits a convergent subsequence ν*_{n_k} → ν̄ for some measure ν̄. Continuity of F allows
us to pass to the limit lim_{k→∞} F[ν*_{n_k}] = F[lim_{k→∞} ν*_{n_k}]. On the other hand, lim_{k→∞} F[ν*_{n_k}] =
F[ν*], and F is strictly convex [28], thus ν*_{n_k} → ν* weakly.
Before proving Theorem 4.1, we need smoothness of the barycenter functional F with respect to
Wasserstein-2 distance:

Lemma 4.1. Suppose we are given measures {μ_j}_{j=1}^J, ν, and {ν_n}_{n=1}^∞ supported on K, with
ν_n → ν. Then F[ν_n] → F[ν], with |F[ν_n] − F[ν]| ≤ 2D · W₂(ν_n, ν).
Proof of Theorem 4.1. Uniqueness of ν* follows from Theorem 2.4 of [28]. From Theorem 5.1
in [28] we know further that ν* is absolutely continuous with respect to volume.

Let N > 0, and let ν_N be the discrete distribution on N points, each with mass 1/N, which minimizes
W₂(ν_N, ν*). This distribution satisfies W₂(ν_N, ν*) ≤ CN^{−1/d} [30], where C depends on K, the
dimension d, and the metric. With our "budget" of n support points, we can construct a 2ε_n(K)-cover
as long as n is sufficiently large. Then define a distribution ν_{n,N} with support on the 2ε_n(K)-cover
as follows: for each x in the support of ν_N, map x to the closest point x′ in the cover, and add mass
1/N to x′. Note that this defines not only the distribution ν_{n,N}, but also a transport plan between ν_N
and ν_{n,N}. This map moves N points of mass 1/N each a distance at most 2ε_n(K), so we may bound
W₂(ν_{n,N}, ν_N) ≤ √(N · (1/N) · (2ε_n(K))²) = 2ε_n(K). Combining these two bounds, we see that

    W₂(ν_{n,N}, ν*) ≤ W₂(ν_{n,N}, ν_N) + W₂(ν_N, ν*)    (13)
                   ≤ 2ε_n(K) + CN^{−1/d}.    (14)

For each n, we choose to set N = n, which yields W₂(ν_{n,n}, ν*) ≤ 2ε_n(K) + Cn^{−1/d}. Applying
Lemma 4.1, and recalling that ν* is the minimizer of F, we have

    F[ν_{n,n}] − F[ν*] ≤ 2D · (2ε_n(K) + Cn^{−1/d}) = O(ε_n(K) + n^{−1/d}).    (15)

However, we must have F[ν*_n] ≤ F[ν_{n,n}], because both are measures on the same n-point 2ε_n(K)-cover, but ν*_n has weights chosen to minimize F. Thus we must also have

    F[ν*_n] − F[ν*] ≤ F[ν_{n,n}] − F[ν*] ≤ O(ε_n(K) + n^{−1/d}).
The high-level view of the above result is that choosing support points y_i to form an ε-cover with
respect to the metric g, and then optimizing over their weights w_i via our stochastic algorithm, will
give us a consistent picture of the behavior of the true barycenter. Also note that the proof above
requires an ε-cover only of the support of ν*, not all of K. In particular, an ε-cover of the convex hull
of the supports of μ_j is sufficient, as this must contain the barycenter. Other heuristic techniques to
efficiently focus a limited budget of n points only on the support of ν* are advantageous and justified.
While Theorem 4.1 is a good start, ideally we would also be able to provide a bound on W₂(ν*_n, ν*).
This would follow readily from sharpness of the functional F[ν], or even the discrete version F(w),
but it is not immediately clear how to achieve such a result.
5 Experiments
We demonstrate the applicability of our method on two experiments, one synthetic and one performing a real inference task. Together, these showcase the positive traits of our algorithm: speed,
parallelization, robustness to non-stationarity, applicability to non-Euclidean domains, and immediate
performance benefit to Bayesian inference. We implemented our algorithm in C++ using MPI, and our
code is posted at github.com/mstaib/stochastic-barycenter-code. Full experiment details
are given in Appendix D.
Figure 1: The Wasserstein barycenter of four von Mises-Fisher distributions on the unit sphere S².
From left to right, the figures show the initial distributions merging into the Wasserstein barycenter.
As the input distributions are moved along parallel paths on the sphere, the barycenter accurately
tracks the new locations as shown in the final three figures.
5.1 Von Mises-Fisher Distributions with Drift
We demonstrate computation and tracking of the barycenter of four drifting von Mises-Fisher
distributions on the unit sphere S². Note that W₂ and the barycentric cost are now defined with
respect to geodesic distance on S².
The distributions are randomly centered, and we move the center of each distribution 3 × 10⁻⁵ radians
(in the same direction for all distributions) each time a sample is drawn. A snapshot of the results is
shown in Figure 1. Our algorithm is clearly able to track the barycenter as the distributions move.
5.2 Large Scale Bayesian Inference
We run logistic regression on the UCI skin segmentation dataset [8]. The 245057 datapoints are
colors represented in R³, each with a binary label determining whether that color is a skin color.
We split consecutive blocks of the dataset into 127 subsets, and due to locality in the dataset, the
data in each subset is not identically distributed. Each subset is assigned one thread of an InfiniBand
cluster on which we simultaneously sample from the subset posterior via MCMC and optimize the
barycenter estimate. This is in contrast to [47], where the barycenter can be computed via a linear
program (LP) only after all samplers are run.
Figure 2: Convergence of our algorithm with n ≈ 10⁴ for different stepsizes. In each case we recover
a better approximation than what was possible with the LP for any n, in as little as ≈ 30 seconds.

Since the full dataset is tractable, we can compare the two methods via W₂ distance to the posterior
of the full dataset, which we can estimate via the large-scale optimal transport algorithm in [22] or
by LP depending on the support size. For each method, we fix n barycenter support points on a
mesh determined by samples from the subset posteriors. After 317 seconds, or about 10000 iterations
per subset posterior, our algorithm has produced a barycenter on n ≈ 10⁴ support points with W₂
distance about 26 from the full posterior. Similarly competitive results hold even for n ≈ 10⁵ or 10⁶,
though tuning the stepsize becomes more challenging. Even in the 10⁶ case, no individual 16-thread
node used more than 2GB of memory. For n ≈ 10⁴, over a wide range of stepsizes we can in seconds
approximate the full posterior better than is possible with the LP, as seen
In comparison, in Table 1 we attempt to compute the barycenter LP as in [47] via Mosek [4],
for varying values of n. Even n = 480 is not possible on a system with 16GB of memory, and
feasible values of n result in meshes too sparse to accurately and reliably approximate the barycenter.
Specifically, there are several cases where n increases but the approximation quality actually decreases:
the subset posteriors are spread far apart, and the barycenter is so small relative to the required
bounding box that likely only one grid point is close to it, and how close this grid point is depends on
the specific mesh. To avoid this behavior, one must either use a dense grid (our approach), or invent a
better method for choosing support points that will still cover the barycenter. In terms of compute
time, entropy-regularized methods may have fared better than the LP for finer meshes but would still
Table 1: Number of support points n versus computation time and W₂ distance to the true posterior.
Compared to prior work, our algorithm handles much finer meshes, producing much better estimates.

                     Linear program from [47]                                        This paper
n          24      40      60      84      189     320     396     480              10⁴
time (s)   0.5     0.97    2.9     6.1     34      163     176     out of memory    317
W₂         41.1    59.3    50.0    34.3    44.3    53.7    45      out of memory    26.3
not give the same result as our method. Note also that the LP timings include only optimization time,
whereas in 317 seconds our algorithm produces samples and optimizes.
6 Conclusion and Future Directions
We have proposed an original algorithm for computing the Wasserstein barycenter of arbitrary
measures given a stream of samples. Our algorithm is communication-efficient, highly parallel,
easy to implement, and enjoys consistency results that, to the best of our knowledge, are new. Our
method has immediate impact on large-scale Bayesian inference and sensor fusion tasks: for Bayesian
inference in particular, we obtain far finer estimates of the Wasserstein-averaged subset posterior
(WASP) [47] than was possible before, enabling faster and more accurate inference.
There are many directions for future work: we have barely scratched the surface in terms of new
applications of large-scale Wasserstein barycenters, and there are still many possible algorithmic
improvements. One implication of Theorem 3.1 is that a faster algorithm for solving the concave
problem (11) immediately yields faster convergence to the barycenter. Incorporating variance reduction [18, 27] is a promising direction, provided we maintain communication-efficiency. Recasting
problem (11) as distributed consensus optimization [35, 11] would further help scale up the barycenter
computation to huge numbers of input measures.
Acknowledgements We thank the anonymous reviewers for their helpful suggestions. We also
thank MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing computational resources. M. Staib acknowledges Government support under and awarded by DoD, Air
Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG)
Fellowship, 32 CFR 168a. J. Solomon acknowledges funding from the MIT Research Support Committee ("Structured Optimization for Geometric Problems"), as well as Army Research Office grant
W911NF-12-R-0011 ("Smooth Modeling of Flows on Graphs"). This research was supported by
NSF CAREER award 1553284 and The Defense Advanced Research Projects Agency (grant number
N66001-17-1-4039). The views, opinions, and/or findings contained in this article are those of the
author and should not be interpreted as representing the official views or policies, either expressed or
implied, of the Defense Advanced Research Projects Agency or the Department of Defense.
References
[1] M. Agueh and G. Carlier. Barycenters in the Wasserstein Space. SIAM J. Math. Anal., 43(2):904–924, January 2011. ISSN 0036-1410. doi: 10.1137/100805741.
[2] Ethan Anderes, Steffen Borgwardt, and Jacob Miller. Discrete Wasserstein barycenters: Optimal transport for discrete data. Math Meth Oper Res, 84(2):389–409, October 2016. ISSN 1432-2994, 1432-5217. doi: 10.1007/s00186-016-0549-x.
[3] Elaine Angelino, Matthew James Johnson, and Ryan P. Adams. Patterns of scalable bayesian inference. Foundations and Trends in Machine Learning, 9(2-3):119–247, 2016. ISSN 1935-8237. doi: 10.1561/2200000052. URL http://dx.doi.org/10.1561/2200000052.
[4] MOSEK ApS. The MOSEK optimization toolbox for MATLAB manual. Version 8.0.0.53., 2017. URL http://docs.mosek.com/8.0/toolbox/index.html.
[5] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. 2017.
[6] M. Baum, P. K. Willett, and U. D. Hanebeck. On Wasserstein Barycenters and MMOSPA Estimation. IEEE Signal Process. Lett., 22(10):1511–1515, October 2015. ISSN 1070-9908. doi: 10.1109/LSP.2015.2410217.
[7] J. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman Projections for Regularized Transportation Problems. SIAM J. Sci. Comput., 37(2):A1111–A1138, January 2015. ISSN 1064-8275. doi: 10.1137/141000439.
[8] Rajen Bhatt and Abhinav Dhall. Skin segmentation dataset. UCI Machine Learning Repository.
[9] Jérémie Bigot, Elsa Cazelles, and Nicolas Papadakis. Regularization of barycenters in the Wasserstein space. arXiv:1606.01025 [math, stat], June 2016.
[10] Emmanuel Boissard, Thibaut Le Gouic, and Jean-Michel Loubes. Distribution's template estimate with Wasserstein metrics. Bernoulli, 21(2):740–759, May 2015. ISSN 1350-7265. doi: 10.3150/13-BEJ585.
[11] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[12] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, and Michael I Jordan. Streaming Variational Bayes. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1727–1735. Curran Associates, Inc., 2013.
[13] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1683–1691, Bejing, China, 22–24 Jun 2014. PMLR. URL http://proceedings.mlr.press/v32/cheni14.html.
[14] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal Transport for Domain Adaptation. IEEE Trans. Pattern Anal. Mach. Intell., PP(99):1–1, 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2016.2615921.
[15] Marco Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2292–2300. Curran Associates, Inc., 2013.
[16] Marco Cuturi and Arnaud Doucet. Fast Computation of Wasserstein Barycenters. pages 685–693, 2014.
[17] Marco Cuturi and Gabriel Peyré. A Smoothed Dual Approach for Variational Wasserstein Problems. SIAM J. Imaging Sci., 9(1):320–343, January 2016. doi: 10.1137/15M1032600.
[18] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646–1654, 2014.
[19] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279. ACM, 2008.
[20] Yoav Freund and Robert E. Schapire. Adaptive Game Playing Using Multiplicative Weights. Games and Economic Behavior, 29(1):79–103, October 1999. ISSN 0899-8256. doi: 10.1006/game.1999.0738.
[21] Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein Loss. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2053–2061. Curran Associates, Inc., 2015.
[22] Aude Genevay, Marco Cuturi, Gabriel Peyré, and Francis Bach. Stochastic Optimization for Large-scale Optimal Transport. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3440–3448. Curran Associates, Inc., 2016.
[23] Elad Hazan. Introduction to Online Convex Optimization. OPT, 2(3-4):157–325, August 2016. ISSN 2167-3888, 2167-3918. doi: 10.1561/2400000013.
[24] Michael Held, Philip Wolfe, and Harlan P. Crowder. Validation of subgradient optimization. Mathematical Programming, 6(1):62–88, December 1974. ISSN 0025-5610, 1436-4646. doi: 10.1007/BF01580223.
[25] Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[26] Matthew Johnson, James Saunderson, and Alan Willsky.
Analyzing hogwild parallel gaussian gibbs sampling.
In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani,
and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26,
pages 2715?2723. Curran Associates, Inc., 2013.
URL http://papers.nips.cc/paper/
5043-analyzing-hogwild-parallel-gaussian-gibbs-sampling.pdf.
[27] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction.
In Advances in Neural Information Processing Systems, pages 315?323, 2013.
[28] Young-Heon Kim and Brendan Pass. Wasserstein barycenters over Riemannian manifolds. Advances in
Mathematics, 307:640?683, February 2017. ISSN 0001-8708. doi: 10.1016/j.aim.2016.11.026.
[29] Jun Kitagawa, Quentin M?rigot, and Boris Thibert. Convergence of a Newton algorithm for semi-discrete
optimal transport. arXiv:1603.05579 [cs, math], March 2016.
[30] Beno?t Kloeckner. Approximation by finitely supported measures. ESAIM Control Optim. Calc. Var., 18
(2):343?359, 2012. ISSN 1292-8119.
[31] Bruno L?vy. A Numerical Algorithm for L2 Semi-Discrete Optimal Transport in 3D. ESAIM Math. Model.
Numer. Anal., 49(6):1693?1715, November 2015. ISSN 0764-583X, 1290-3841. doi: 10.1051/m2an/
2015055.
[32] C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of /n. J
Optim Theory Appl, 50(1):195?200, July 1986. ISSN 0022-3239, 1573-2878. doi: 10.1007/BF00938486.
[33] Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, and David Dunson. Scalable and Robust Bayesian
Inference via the Median Posterior. In PMLR, pages 1656?1664, January 2014.
[34] Gr?goire Montavon, Klaus-Robert M?ller, and Marco Cuturi. Wasserstein Training of Restricted Boltzmann
Machines. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in
Neural Information Processing Systems 29, pages 3718?3726. Curran Associates, Inc., 2016.
[35] Angelia Nedic and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE
Transactions on Automatic Control, 54(1):48?61, 2009.
[36] Willie Neiswanger, Chong Wang, and Eric P. Xing. Asymptotically exact, embarrassingly parallel
mcmc. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, UAI?14,
pages 623?632, Arlington, Virginia, United States, 2914. AUAI Press. ISBN 978-0-9749039-1-0. URL
http://dl.acm.org/citation.cfm?id=3020751.3020816.
[37] Arkadi Nemirovski and Reuven Y. Rubinstein. An Efficient Stochastic Approximation Algorithm for
Stochastic Saddle Point Problems. In Moshe Dror, Pierre L?Ecuyer, and Ferenc Szidarovszky, editors, Modeling Uncertainty, number 46 in International Series in Operations Research & Management Science, pages 156?184. Springer US, 2005. ISBN 978-0-7923-7463-3 978-0-306-48102-4. doi:
10.1007/0-306-48102-2_8.
[38] David Newman, Padhraic Smyth, Max Welling, and Arthur U. Asuncion. Distributed inference for latent
dirichlet allocation. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1081?1088. Curran Associates, Inc., 2008. URL http://papers.
nips.cc/paper/3330-distributed-inference-for-latent-dirichlet-allocation.pdf.
[39] Gabriel Peyr?, Marco Cuturi, and Justin Solomon. Gromov-Wasserstein Averaging of Kernel and Distance
Matrices. In PMLR, pages 2664?2672, June 2016.
[40] Julien Rabin, Gabriel Peyr?, Julie Delon, and Marc Bernot. Wasserstein Barycenter and Its Application to
Texture Mixing. In Scale Space and Variational Methods in Computer Vision, pages 435?446. Springer,
Berlin, Heidelberg, May 2011. doi: 10.1007/978-3-642-24785-9_37.
[41] Antoine Rolet, Marco Cuturi, and Gabriel Peyr?. Fast Dictionary Learning with a Smoothed Wasserstein
Loss. In PMLR, pages 630?638, May 2016.
[42] Filippo Santambrogio. Optimal Transport for Applied Mathematicians, volume 87 of Progress in Nonlinear
Differential Equations and Their Applications. Springer International Publishing, Cham, 2015. ISBN
978-3-319-20827-5 978-3-319-20828-2. doi: 10.1007/978-3-319-20828-2.
[43] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms.
Cambridge university press, 2014.
11
[44] Ohad Shamir. Without-replacement sampling for stochastic gradient methods. In D. D. Lee, M. Sugiyama,
U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 46?54. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/
6245-without-replacement-sampling-for-stochastic-gradient-methods.pdf.
[45] Justin Solomon, Fernando de Goes, Gabriel Peyr?, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du,
and Leonidas Guibas. Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric
Domains. ACM Trans Graph, 34(4):66:1?66:11, July 2015. ISSN 0730-0301. doi: 10.1145/2766963.
[46] Suvrit Sra. Directional Statistics in Machine Learning: A Brief Review. arXiv:1605.00316 [stat], May
2016.
[47] Sanvesh Srivastava, Volkan Cevher, Quoc Dinh, and David Dunson. WASP: Scalable Bayes via barycenters
of subset posteriors. In Guy Lebanon and S. V. N. Vishwanathan, editors, Proceedings of the Eighteenth
International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine
Learning Research, pages 912?920, San Diego, California, USA, 09?12 May 2015. PMLR. URL http:
//proceedings.mlr.press/v38/srivastava15.html.
[48] Sanvesh Srivastava, Cheng Li, and David B. Dunson. Scalable Bayes via Barycenter in Wasserstein Space.
arXiv:1508.05880 [stat], August 2015.
[49] C?dric Villani. Optimal Transport: Old and New. Number 338 in Grundlehren der mathematischen
Wissenschaften. Springer, Berlin, 2009. ISBN 978-3-540-71049-3. OCLC: ocn244421231.
[50] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings
of the 28th International Conference on Machine Learning (ICML-11), pages 681?688, 2011.
[51] J. Ye, P. Wu, J. Z. Wang, and J. Li. Fast Discrete Distribution Clustering Using Wasserstein Barycenter
With Sparse Support. IEEE Trans. Signal Process., 65(9):2317?2332, May 2017. ISSN 1053-587X. doi:
10.1109/TSP.2017.2659647.
[52] Yuchen Zhang, John C Duchi, and Martin J Wainwright. Communication-efficient algorithms for statistical
optimization. Journal of Machine Learning Research, 14:3321?3363, 2013.
[53] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression: A
distributed algorithm with minimax optimal rates. Journal of Machine Learning Research, 16:3299?3340,
2015. URL http://jmlr.org/papers/v16/zhang15d.html.
[54] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928?936, 2003.
12
6,477 | 6,859 | ELF: An Extensive, Lightweight and Flexible
Research Platform for Real-time Strategy Games
Yuandong Tian^1, Qucheng Gong^1, Wenling Shang^2, Yuxin Wu^1, C. Lawrence Zitnick^1
^1 Facebook AI Research   ^2 Oculus
^1 {yuandong, qucheng, yuxinwu, zitnick}@fb.com   ^2 wendy.shang@oculus.com
Abstract
In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform
for fundamental reinforcement learning research. Using ELF, we implement a
highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames per second (FPS) per core on a laptop. When coupled with modern reinforcement
learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in
terms of environment-agent communication topologies, choices of RL methods,
changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and
show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based
built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show that
our agents learn interesting strategies. ELF, along with its RL platform, is open-sourced at https://github.com/facebookresearch/ELF.
1 Introduction
Game environments are commonly used for research in Reinforcement Learning (RL), i.e. how to
train intelligent agents to behave properly from sparse rewards [4, 6, 5, 14, 29]. Compared to the
real world, game environments offer an infinite amount of highly controllable, fully reproducible,
and automatically labeled data. Ideally, a game environment for fundamental RL research is:
- Extensive: The environment should capture many diverse aspects of the real world, such as rich dynamics, partial information, delayed/long-term rewards, and concurrent actions with different granularity. Having an extensive set of features and properties increases the potential for trained agents to generalize to diverse real-world scenarios.
- Lightweight: A platform should be fast and capable of generating samples hundreds or thousands of times faster than real-time with minimal computational resources (e.g., a single machine). Lightweight and efficient platforms help accelerate academic research of RL algorithms, particularly for methods which are heavily data-dependent.
- Flexible: A platform should be easily customizable at different levels, including rich choices of environment content, easy manipulation of game parameters, accessibility of internal variables, and flexibility of training architectures. All are important for fast exploration of different algorithms. For example, changing environment parameters [35], as well as using internal data [15, 19], have been shown to substantially accelerate training.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
To our knowledge, no current game platforms satisfy all criteria. Modern commercial games (e.g.,
StarCraft I/II, GTA V) are extremely realistic, but are not customizable and require significant resources for complex visual effects and for computational costs related to platform-shifting (e.g., a
virtual machine to host Windows-only SC I on Linux). Old games and their wrappers [4, 6, 5, 14]
are substantially faster, but are less realistic with limited customizability. On the other hand, games
designed for research purposes (e.g., MazeBase [29], µRTS [23]) are efficient and highly customizable, but are not very extensive in their capabilities. Furthermore, none of these environments consider
simulation concurrency, and thus have limited flexibility when different training architectures are
applied. For instance, the interplay between RL methods and environments during training is often
limited to providing simplistic interfaces (e.g., one interface for one game) in scripting languages
like Python.
In this paper, we propose ELF, a research-oriented platform that offers games with diverse properties, efficient simulation, and highly customizable environment settings. The platform allows for
both game parameter changes and new game additions. The training of RL methods is deeply and
flexibly integrated into the environment, with an emphasis on concurrent simulations. On ELF,
we build a real-time strategy (RTS) game engine that includes three initial environments including
Mini-RTS, Capture the Flag and Tower Defense. Mini-RTS is a miniature custom-made RTS game
that captures all the basic dynamics of StarCraft (fog-of-war, resource gathering, troop building,
defense/attack with troops, etc). Mini-RTS runs at 165K FPS on a 4 core laptop, which is faster than
existing environments by an order of magnitude. This enables us, for the first time, to train a full-game bot end-to-end against built-in AIs. Moreover, training is accomplished in only one day using
6 CPUs and 1 GPU. The other two games can be trained with similar (or higher) efficiency.
Many real-world scenarios and complex games (e.g. StarCraft) are hierarchical in nature. Our RTS
engine has full access to the game data and has a built-in hierarchical command system, which
allows training at any level of the command hierarchy. As we demonstrate, this allows us to train
a full-game bot that acts on the top-level strategy in the hierarchy while lower-level commands are
handled using build-in tactics. Previously, most research on RTS games focused only on lower-level
scenarios such as tactical battles [34, 25]. The full access to the game data also allows for supervised
training with small-scale internal data.
ELF is resilient to changes in the topology of the environment-actor communication used for training, thanks to its hybrid C++/Python framework. These include one-to-one, many-to-one and one-to-many mappings. In contrast, existing environments (e.g., OpenAI Gym [6] and Universe [33])
wrap one game in one Python interface, which makes it cumbersome to change topologies. Parallelism is implemented in C++, which is essential for simulation acceleration. Finally, ELF is capable
of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games
(e.g. Chess and Go [32]), physics engines (e.g., Bullet [10]), etc, by writing a simple adaptor.
Equipped with a flexible RL backend powered by PyTorch, we experiment with numerous baselines,
and highlight effective techniques used in training. We show the first demonstration of end-to-end trained AIs for real-time strategy games with partial information. We use the Asynchronous
Advantage Actor-Critic (A3C) model [21] and explore extensive design choices including frame-skip, temporal horizon, network structure, curriculum training, etc. We show that a network with
Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive
curriculum beats the rule-based built-in AI more than 70% of the time in full-game Mini-RTS. We
also show stronger performance in others games. ELF and its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF.
2 Architecture
ELF follows a canonical and simple producer-consumer paradigm (Fig. 1). The producer plays N
games, each in a single C++ thread. When a batch of M current game states is ready (M < N), the corresponding games are blocked and the batch is sent to the Python side via the daemon. The consumers (e.g., actor, optimizer, etc.) get batched experience with history information via a Python/C++
interface and send back the replies to the blocked batch of the games, which are waiting for the next
action and/or values, so that they can proceed. For simplicity, the producer and consumers are in
the same process. However, they can also live in different processes, or even on different machines.
Before the training (or evaluation) starts, different consumers register themselves for batches with
Figure 1: Overview of ELF. (Diagram: N games, each with a history buffer, feed a daemon/batch collector; batches with history info go to the Python-side consumers (actor with model, optimizer), whose replies resume the games.)
different history lengths. For example, an actor might need a batch with a short history, while an optimizer (e.g., T-step actor-critic) needs a batch with a longer history. During training, the consumers use the batch in various ways. For example, the actor takes the batch and returns the probabilities of actions (and values); the actions are then sampled from the distribution and sent back. The batch
received by the optimizer already contains the sampled actions from the previous steps, and can be
used to drive reinforcement learning algorithms such as A3C. Here is a sample usage of ELF:
# We run 1024 games concurrently.
num_games = 1024
# Wait for a batch of 256 games.
batchsize = 256
# The returned states contain the keys 's', 'r' and 'terminal'.
# The reply contains the key 'a', to be filled from the Python side.
# The definitions of the keys are in the wrapper of the game.
input_spec = dict(s="", r="", terminal="")
reply_spec = dict(a="")
context = Init(num_games, batchsize, input_spec, reply_spec)

Initialization of ELF

# Start all game threads and enter the main loop.
context.Start()
while True:
    # Wait for a batch of game states to be ready.
    # These games will be blocked, waiting for replies.
    batch = context.Wait()
    # Apply a model to the game states. The output has key 'pi'.
    output = model(batch)
    # Sample from the output to get the actions for this batch.
    reply['a'][:] = SampleFromDistribution(output)
    # Resume the games.
    context.Steps()
# Stop all game threads.
context.Stop()

Main loop of ELF
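The listing above uses a single consumer. As a rough sketch of the multi-consumer setup described earlier, an actor and a T-step optimizer sharing the same games might be wired up as follows. `Init`, `Start`, `Wait`, `Steps`, `reply` and `SampleFromDistribution` are the names from the listing; `policy_value_net` and `a3c_update` are hypothetical placeholders for a PyTorch model and its update rule, not part of the actual ELF API.

# A minimal sketch, assuming the hypothetical API of the listing above.
input_spec = dict(s="", r="", terminal="")
reply_spec = dict(a="", V="")               # the optimizer also records values
context = Init(num_games, batchsize, input_spec, reply_spec)
context.Start()
for step in range(max_steps):
    batch = context.Wait()                  # the batch carries per-game history
    pi, V = policy_value_net(batch)         # actor-critic forward pass
    reply["a"][:] = SampleFromDistribution(pi)
    reply["V"][:] = V
    a3c_update(policy_value_net, batch)     # schematic T-step actor-critic update
    context.Steps()
context.Stop()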
Parallelism using C++ threads. Modern reinforcement learning methods often require heavy parallelism to obtain diverse experiences [21, 22]. Most existing RL environments (OpenAI Gym [6]
and Universe [33], RLE [5], Atari [4], Doom [14]) provide Python interfaces which wrap only single game instances. As a result, parallelism needs to be built in Python when applying modern RL
methods. However, thread-level parallelism in Python can only poorly utilize multi-core processors,
due to the Global Interpreter Lock (GIL)¹. Process-level parallelism will also introduce extra data
exchange overhead between processes and increase complexity to framework design. In contrast,
our parallelism is achieved with C++ threads for better scaling on multi-core CPUs.
Flexible Environment-Model Configurations. In ELF, one or multiple consumers can be used.
Each consumer knows the game environment identities of samples from received batches, and typically contains one neural network model. The models of different consumers may or may not share
parameters, might update the weights, might reside in different processes or even on different machines. This architecture offers flexibility for switching topologies between game environments and
models. We can assign one model to each game environment, or one-to-one (e.g, vanilla A3C [21]),
in which each agent follows and updates its own copy of the model. Similarly, multiple environments can be assigned to a single model, or many-to-one (e.g., BatchA3C [35] or GA3C [1]), where
the model can perform batched forward prediction to better utilize GPUs. We have also incorporated
forward-planning methods (e.g., Monte-Carlo Tree Search (MCTS) [7, 32, 27]) and Self-Play [27],
in which a single environment might emit multiple states processed by multiple models, or one-to-many. Using ELF, these training configurations can be tested with minimal changes.
Highly customizable and unified interface. Games implemented with our RTS engine can be
trained using raw pixel data or lower-dimensional internal game data. Using internal game data is
¹ The GIL in Python forbids simultaneous interpretation of multiple statements even on multi-core CPUs.
Figure 2: Hierarchical layout of ELF. (Diagram: games such as Pong, Breakout, board games, Mini-RTS, Capture the Flag and Tower Defense sit on top of the ALE and RTS-engine backends.) In the current repository (https://github.com/facebookresearch/ELF, master branch), there are board games (e.g., Go [32]), the Atari learning environment [4], and a customized RTS engine that contains three simple games.
Figure 3: Overview of the real-time strategy engine. (a) Visualization of the current game state, showing workers, resources, your base and barracks, selected units, enemy units and the enemy base. (b) The three game environments and their descriptions:

Game Name        | Description                                                        | Avg Game Length
Mini-RTS         | Gather resources and build troops to destroy the opponent's base. | 1000-6000 ticks
Capture the Flag | Capture the flag and bring it to your own base.                    | 1000-4000 ticks
Tower Defense    | Build defensive towers to block enemy invasion.                    | 1000-2000 ticks
typically more convenient for research focusing on reasoning tasks rather than perceptual ones. Note
that web-based visual rendering is also supported (e.g., Fig. 3(a)) for case-by-case debugging.
ELF allows for a unified interface capable of hosting any existing game written in C/C++, including
Atari games (e.g., ALE [4]), board games (e.g. Go [32]), and a customized RTS engine, with a
simple adaptor (Fig. 2). This enables easy multi-threaded training and evaluation using existing RL
methods. Besides, we also provide three concrete simple games based on RTS engine (Sec. 3).
Reinforcement Learning backend. We propose a Python-based RL backend. It has a flexible
design that decouples RL methods from models. Multiple baseline methods (e.g., A3C [21], Policy
Gradient [30], Q-learning [20], Trust Region Policy Optimization [26], etc) are implemented, mostly
with very few lines of Python code.
3 Real-time Strategy Games
Real-time strategy (RTS) games are considered to be one of the next grand AI challenges after Chess
and Go [27]. In RTS games, players commonly gather resources, build units (facilities, troops, etc),
and explore the environment in the fog-of-war (i.e., regions outside the sight of units are invisible)
to invade/defend against the enemy, until one player wins. RTS games are known for their exponential and
changing action spaces (e.g., 5^10 possible actions for 10 units with 5 choices each, where units of each
player can be built/destroyed as the game advances), subtle game situations, and incomplete information
due to limited sight and long-delayed rewards. Typically, professional players take 200-300 actions
per minute, and the game lasts for 20-30 minutes.
Very few existing RTS engines can be used directly for research. Commercial RTS games (e.g.,
StarCraft I/II) have sophisticated dynamics, interactions and graphics. The game play strategies
have long been proven to be complex. Moreover, they are closed-source with unknown internal states,
and cannot be easily utilized for research. Open-source RTS games like Spring [12], OpenRA [24]
and Warzone 2100 [28] focus on complex graphics and effects, convenient user interface, stable
network play, flexible map editors and plug-and-play mods (i.e., game extensions). Most of them
use rule-based AIs, do not intend to run faster than real-time, and offer no straightforward interface
with modern machine learning architectures. ORTS [8], BattleCode [2] and RoboCup Simulation
League [16] are designed for coding competitions and focused on rule-based AIs. Research-oriented
platforms (e.g., µRTS [23], MazeBase [29]) are fast and simple, often coming with various baselines,

                 | Realistic | Code | Resource | Rule AIs | Data AIs | RL backend
StarCraft I/II   | High      | No   | High     | Yes      | No       | No
TorchCraft       | High      | Yes  | High     | Yes      | Yes      | No
ORTS, BattleCode | Mid       | Yes  | Low      | Yes      | No       | No
µRTS, MazeBase   | Low       | Yes  | Low      | Yes      | Yes      | No
Mini-RTS         | Mid       | Yes  | Low      | Yes      | Yes      | Yes

Table 1: Comparison between different RTS engines.
Platform          | ALE [4]          | RLE [5]      | Universe [33]        | Malmo [13]
Frames per second | 6,000            | 530          | 60                   | 120

Platform          | DeepMind Lab [3] | VizDoom [14] | TorchCraft [31]      | Mini-RTS
Frames per second | 287(C)/866(G)    | ≈7,000       | 2,000 (frameskip=50) | 40,000

Table 2: Frame-rate comparison. Note that Mini-RTS does not render frames, but saves game information into a C structure which is used in Python without copying. For DeepMind Lab, FPS is 287 (CPU) and 866 (GPU) on a single 6 CPU + 1 GPU machine. Other numbers are for a single CPU core.
but often with much simpler dynamics than RTS games. Recently, TorchCraft [31] provides APIs for
StarCraft I to access its internal game states. However, due to platform incompatibility, one Docker container is used to host each StarCraft engine, which is resource-consuming. Tbl. 1 summarizes the differences.
3.1 Our approach
Many popular RTS games and their variants (e.g., StarCraft, DotA, League of Legends, Tower Defense) share the same structure: a few units are controlled by a player to move, attack, gather, or cast special spells, influencing their own or an enemy's army. With our command hierarchy, a new game can be created by changing (1) the available commands, (2) the available units, and (3) how each unit emits
commands triggered by certain scenarios. For this, we offer simple yet effective tools. Researchers
can change these variables either by adding commands in C++, or by writing game scripts (e.g.,
Lua). All derived games share the mechanism of hierarchical commands, replay, etc. Rule-based
AIs can also be extended similarly. We provide the following three games: Mini-RTS, Capture the
Flag and Tower Defense (Fig. 3(b)). These games share the following properties:
Gameplay. Units in each game move with real coordinates, have dimensions and collision checks,
and perform durative actions. The RTS engine is tick-driven. At each tick, AIs make decisions
by sending commands to units based on observed information. Then commands are executed, the game's state changes, and the game continues. Despite a fairly complicated game mechanism, Mini-RTS is able to run 40K frames per second per core on a laptop, an order of magnitude faster than most existing environments. Therefore, bots can be trained in a day on a single machine.
Built-in hierarchical command levels. An agent could issue strategic commands (e.g., more aggressive expansion), tactical commands (e.g., hit and run), or micro-commands (e.g., move a particular unit backward to avoid damage). Ideally, strong agents master all levels; in practice, they may
focus on a certain level of command hierarchy, and leave others to be covered by hard-coded rules.
For this, our RTS engine uses a hierarchical command system that offers different levels of control over the game. A high-level command may affect all units by issuing low-level commands. A low-level, unit-specific durative command lasts a few ticks until completion, during which per-tick immediate commands are issued.
Built-in rule-based AIs. We have designed rule-based AIs along with the environment. These AIs
have access to all the information of the map and follow fixed strategies (e.g., build 5 tanks and
attack the opponent base). These AIs act by sending high-level commands which are then translated
to low-level ones and then executed.
With ELF, for the first time, we are able to train full-game bots for real-time strategy games and
achieve stronger performance than built-in rule-based AIs. In contrast, existing RTS AIs are either
rule-based or focused on tactics (e.g., 5 units vs. 5 units). We run experiments on the three games to
justify the usability of our platform.
Figure 4: Frames per second per CPU core (no hyper-threading) with respect to the number of CPU cores (1-16) and threads (64-1024), for Mini-RTS (left) and Pong/Atari (right). ELF (light-shaded) is 3x faster than OpenAI Gym [6] (dark-shaded) with 1024 threads. CPU used in testing: Intel E5-2680 @ 2.50GHz.
4 Experiments
4.1 Benchmarking ELF
We run ELF on a single server with varying numbers of CPU cores to test the efficiency of parallelism. Fig. 4(a) shows the results when running Mini-RTS. We can see that ELF scales well with the number of CPU cores used to run the environments. We also embed the Atari emulator [4] into our platform and check the per-core speed difference between a single-threaded ALE and our parallelized ALE (Fig. 4(b)). While a single-threaded engine gives around 5.8K FPS on Pong, our parallelized ALE runs at a comparable speed (5.1K FPS per core) with up to 16 cores, while OpenAI Gym (with Python threads) runs 3x slower (1.7K FPS per core) with 16 cores and 1024 threads, and degrades with more cores. The number of threads matters for training since it determines how diverse the experiences
could be, with the same number of CPUs. Apart from this, we observed that Python multiprocessing
with Gym is even slower, due to heavy communication of game frames among processes. Note that
we used no hyperthreading for all experiments.
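As a concrete illustration of how such throughput numbers can be collected, the following sketch times batched stepping under the conventions of the listing in Sec. 2; it is a minimal harness with hypothetical names (`context`, `reply`, `batchsize`), not the benchmark script actually used for Fig. 4.

import time

def measure_fps(context, reply, batchsize, seconds=10.0):
    # Counts environment steps completed per wall-clock second across
    # all game threads; each Wait()/Steps() cycle advances `batchsize` games.
    steps, start = 0, time.time()
    while time.time() - start < seconds:
        batch = context.Wait()     # blocks until a batch of game states is ready
        reply["a"][:] = 0          # a fixed no-op action; we only time the stepping
        context.Steps()            # resume the blocked games
        steps += batchsize
    return steps / (time.time() - start)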
4.2 Baselines on Real-time Strategy Games
We focus on 1-vs-1 full games between trained AIs and built-in AIs. Built-in AIs have access to
full information (e.g., the number of the opponent's tanks), while trained AIs know only partial information within the fog of war, i.e., the game environment within the sight of their own units. There are exceptions: in Mini-RTS, the location of the opponent's base is known so that the trained AI can attack; in Capture
the Flag, the flag location is known to all; Tower Defense is a game of complete information.
Details of Built-in AIs. For Mini-RTS there are two rule-based AIs: SIMPLE gathers resources, builds five tanks and then attacks the opponent's base; HIT_N_RUN often harasses, builds and attacks. For Capture the Flag, we have one built-in AI. For Tower Defense (TD), no AI is needed. We tested our built-in AIs against a human player and found they are strong in combat but exploitable. For example, SIMPLE is vulnerable to hit-and-run-style harassment. As a result, a human player has a win rate of 90% and 50% against SIMPLE and HIT_N_RUN, respectively, in 20 games.
Action Space. For simplicity, we use 9 strategic (and thus global) actions with hard-coded execution
details. For example, the AI may issue BUILD_BARRACKS, which automatically picks a worker to build barracks at an empty location, if the player can afford it. Although this setting is simple, detailed commands (e.g., a command per unit) can easily be set up, which bears more resemblance to StarCraft. A similar setting applies to Capture the Flag and Tower Defense. Please check the Appendix for detailed descriptions.
Rewards. For Mini-RTS, the agent only receives a reward when the game ends (±1 for win/loss). An average game of Mini-RTS lasts for around 4000 ticks, which results in 80 decisions at a frame skip of 50, showing that the reward is indeed delayed. For Capture the Flag, we give intermediate rewards when the flag moves towards the player's own base (one point when the flag "touches down"). In Tower Defense, an intermediate penalty is given if enemy units are leaked.
4.2.1 A3C baseline
Next, we describe our baselines and their variants. Note that while we refer to these as baselines, we are the first to demonstrate end-to-end trained AIs for real-time strategy (RTS) games with partial
information. For all games, we randomize the initial game states for more diverse experience and
Left (Mini-RTS):
Frameskip | SIMPLE      | HIT_N_RUN
50        | 68.4 (±4.3) | 63.6 (±7.9)
20        | 61.4 (±5.8) | 55.4 (±4.7)
10        | 52.8 (±2.4) | 51.1 (±5.0)

Right:
           | Capture the Flag | Tower Defense
Random     | 0.7 (±0.9)       | 36.3 (±0.3)
Trained AI | 59.9 (±7.4)      | 91.0 (±7.6)

Table 3: Win rate of A3C models competing with built-in AIs over 10k games. Left: Mini-RTS; the frame skip of the trained AI is 50. Right: for Capture the Flag, the frame skip of the trained AI is 10 while the opponent's is 50; for Tower Defense, the frame skip of the trained AI is 50 and there is no opponent AI.
Game            | Mini-RTS SIMPLE           | Mini-RTS HIT_N_RUN
                | Median | Mean (± std)     | Median | Mean (± std)
ReLU            | 52.8   | 54.7 (± 4.2)     | 60.4   | 57.0 (± 6.8)
Leaky ReLU      | 59.8   | 61.0 (± 2.6)     | 60.2   | 60.3 (± 3.3)
BN              | 61.0   | 64.4 (± 7.4)     | 55.6   | 57.5 (± 6.8)
Leaky ReLU + BN | 72.2   | 68.4 (± 4.3)     | 65.5   | 63.6 (± 7.9)

Table 4: Win rate in % of A3C models using different network architectures. The frame skip of both sides is 50 ticks. The fact that the medians are better than the means shows that different instances of A3C can converge to very different solutions.
use A3C [21] to train AIs to play the full game. We run all experiments 5 times and report the mean and standard deviation. We use simple convolutional networks with two heads, one for actions and the other for values. The input features are composed of spatially structured (20-by-20) abstractions of the current game environment with multiple channels. At each (rounded) 2D location, the type and hit points of the unit at that location are quantized and written to the corresponding channels. For Mini-RTS, we also add an additional constant channel filled with the player's current resources. The input features only contain the units within the sight of one player, respecting the properties of fog-of-war. For Capture the Flag, immediate action is required in specific situations (e.g., when the opponent just gets the flag) and A3C does not give good performance; therefore we use a frame skip of 10 for the trained AI and 50 for the opponent, to give the trained AI a slight advantage. All models are trained from scratch with curriculum training (Sec. 4.2.2).
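To make the encoding concrete, the following sketch builds such a spatial feature tensor from a list of visible units; the channel layout, the hit-point normalization and the `Unit` fields are illustrative assumptions, not ELF's exact scheme.

import numpy as np
from collections import namedtuple

Unit = namedtuple("Unit", "x y type hp")   # hypothetical unit record

def encode_state(units, resource, n_unit_types, map_size=20, max_hp=100.0):
    # One one-hot channel per unit type, one hit-point channel and, for
    # Mini-RTS, one constant channel holding the player's resources.
    feat = np.zeros((n_unit_types + 2, map_size, map_size), dtype=np.float32)
    for u in units:                          # only units within the player's sight
        x, y = int(round(u.x)), int(round(u.y))
        feat[u.type, y, x] = 1.0
        feat[n_unit_types, y, x] = u.hp / max_hp
    feat[n_unit_types + 1] = resource        # constant resource channel
    return feat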
Note that there are several factors affecting the AI performance.
Frame-skip. A frame skip of 50 means that the AI acts every 50 ticks. Against an opponent with a low frame skip (fast-acting), A3C's performance is generally lower (Tbl. 3). When the opponent has a high frame skip (e.g., 50 ticks), the trained agent is able to find a strategy that exploits the long-delayed nature of the opponent. For example, in Mini-RTS it will send two tanks to the opponent's base. When one tank is destroyed, the opponent does not attack the other tank until the next 50-divisible tick comes. Interestingly, the trained model can adapt to different frame rates and learns to develop different strategies for faster-acting opponents. For Capture the Flag, the trained bot learns to win 60% of games against the built-in AI, with an advantage in frame skip. With even frame skips, the trained AI's performance is low.
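The frame-skip semantics can be summarized with a small wrapper, sketched below under the assumption of a generic step-level interface; `env.step` and `policy` are hypothetical, since ELF implements skipping inside the C++ engine rather than in Python.

def play_with_frameskip(env, policy, skip=50):
    # Decide once, then hold the chosen action for `skip` ticks.
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = policy(obs)
        for _ in range(skip):
            obs, reward, done = env.step(action)
            total_reward += reward
            if done:
                break
    return total_reward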
Network Architectures. Since the input is sparse and heterogeneous, we experiment with CNN architectures with Batch Normalization [11] and Leaky ReLU [18]. BatchNorm stabilizes the gradient flow by normalizing the outputs of each filter. Leaky ReLU preserves the signal of negative linear responses, which is important in scenarios where the input features are sparse. Tbl. 4 shows that these two modifications both improve and stabilize the performance. Furthermore, they are complementary to each other when combined.
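A minimal PyTorch sketch of this kind of two-headed network is shown below; the number of layers, channel widths and the 9-action head follow the descriptions above, but the exact sizes are illustrative assumptions rather than the architecture used in the paper.

import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    def __init__(self, in_channels, n_actions=9, width=64, map_size=20):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
            nn.BatchNorm2d(width),
            nn.LeakyReLU(0.1),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.BatchNorm2d(width),
            nn.LeakyReLU(0.1),
        )
        flat = width * map_size * map_size
        self.policy_head = nn.Linear(flat, n_actions)  # action logits
        self.value_head = nn.Linear(flat, 1)           # state value

    def forward(self, x):               # x: (batch, in_channels, 20, 20)
        h = self.trunk(x).flatten(1)
        return torch.log_softmax(self.policy_head(h), dim=1), self.value_head(h)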
History length. History length T affects the convergence speed, as well as the final performance
of A3C (Fig. 5). While Vanilla A3C [21] uses T = 5 for Atari games, the reward in Mini-RTS
is more delayed (≈ 80 actions before a reward). In this case, the T-step estimate of the reward, R_1 = Σ_{t=1}^{T} γ^{t-1} r_t + γ^T V(s_T), used in A3C does not yield a good estimate of the true reward if V(s_T) is inaccurate, in particular for small T. For other experiments we use T = 6.
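For reference, this estimate can be computed by the usual backward recursion, sketched below; `rewards` holds r_1, ..., r_T from a rollout and `v_last` is the critic's estimate V(s_T).

def t_step_return(rewards, v_last, gamma=0.99):
    # R <- r_t + gamma * R, applied from t = T down to t = 1,
    # which expands to sum_t gamma^(t-1) r_t + gamma^T V(s_T).
    ret = v_last
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret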
Interesting behaviors. The trained AI learns to act promptly and to use sophisticated strategies (Fig. 6). Multiple videos are available at https://github.com/facebookresearch/ELF.
Figure 5: Win rate in Mini-RTS with respect to the amount of experience (samples used, in thousands) at different steps T in A3C, against AI_SIMPLE (left) and AI_HIT_AND_RUN (right), for T = 4, 8, 12, 16, 20. Note that one sample (with history) at T = 2 is equivalent to two samples at T = 1. Longer T shows superior performance over small-step counterparts, even though its samples are more expensive.
Figure 6: Game screenshots between the trained AI (blue) and the built-in SIMPLE AI (red); unit types include workers, short-range tanks and long-range tanks. Player colors are shown on the boundary of hit-point gauges. (a) Trained AI rushes the opponent using an early advantage. (b) Trained AI attacks one opponent unit at a time. (c) Trained AI defends against enemy invasion by blocking their way. (d)-(e) Trained AI uses one long-range attacker (top) to distract enemy units and one melee attacker to attack the enemy's base.
4.2.2 Curriculum Training
We find that curriculum training plays an important role in training AIs. All AIs shown in Tbl. 3 and Tbl. 4 are trained with curriculum training. For Mini-RTS, we let the built-in AI play the first k ticks, where k ∼ Uniform(0, 1000), then switch to the AI to be trained. This (1) reduces the difficulty of the game initially and (2) provides diverse situations for training to avoid local minima. During training, the aid of the built-in AIs is gradually reduced until no aid is given. All reported win rates are obtained by running the trained agents alone with a greedy policy.
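A sketch of this schedule is given below; the linear decay of the aid is an assumption (the paper only states that the aid is gradually reduced), and the returned tick count would be consumed by a hypothetical hook into the game threads.

import random

def builtin_ai_plays_until(progress, max_aid_ticks=1000):
    # `progress` in [0, 1] over the course of training; the built-in AI
    # controls the first k ticks, with k ~ Uniform(0, max_aid) and the
    # maximum aid shrinking to zero by the end of training (assumed linear).
    max_aid = int(max_aid_ticks * (1.0 - progress))
    return random.randint(0, max(0, max_aid))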
We list the comparison with and without curriculum training in Tbl. 6. It is clear that the performance
improves with curriculum training. Similarly, when fine-tuning models pre-trained with one type
of opponent towards a mixture of opponents (e.g., 50% SIMPLE + 50% HIT_N_RUN), curriculum training is critical for better performance (Tbl. 5). Tbl. 5 shows that AIs trained with one built-in AI cannot do very well against another built-in AI in the same game. This demonstrates that training with diverse agents is important for training AIs with low exploitability.
4.2.3 Monte-Carlo Tree Search
Monte-Carlo Tree Search (MCTS) can be used for planning when complete information about the
game is known. This includes the complete state s without fog-of-war, and the precise forward model s' = s'(s, a). Rooted at the current game state, MCTS builds a game tree that is biased
Mini-RTS (trained against \ tested against) | SIMPLE       | HIT_N_RUN    | Combined
SIMPLE                                       | 68.4 (±4.3)  | 26.6 (±7.6)  | 47.5 (±5.1)
HIT_N_RUN                                    | 34.6 (±13.1) | 63.6 (±7.9)  | 49.1 (±10.5)
Combined (no curriculum)                     | 49.4 (±10.0) | 46.0 (±15.3) | 47.7 (±11.0)
Combined                                     | 51.8 (±10.6) | 54.7 (±11.2) | 53.2 (±8.5)

Table 5: Training with a specific/combined AI (Mini-RTS). The frame skip of both sides is 50. When playing against combined AIs (50% SIMPLE + 50% HIT_N_RUN), curriculum training is particularly important.
Game                     | Mini-RTS SIMPLE | Mini-RTS HIT_N_RUN | Capture the Flag
No curriculum training   | 66.0 (±2.4)     | 54.4 (±15.9)       | 54.2 (±20.0)
With curriculum training | 68.4 (±4.3)     | 63.6 (±7.9)        | 59.9 (±7.4)

Table 6: Win rate of A3C models with and without curriculum training. Mini-RTS: frame skip of both sides is 50 ticks. Capture the Flag: frame skip of the trained AI is 10, while the opponent's is 50. The standard deviations of the win rates are large due to the instability of A3C training; for example, in Capture the Flag the highest win rate reaches 70% while the lowest is only 27%.
Game   | Mini-RTS SIMPLE | Mini-RTS HIT_N_RUN
Random | 24.2 (±3.9)     | 25.9 (±0.6)
MCTS   | 73.2 (±0.6)     | 62.7 (±2.0)

Table 7: Win rate using MCTS over 1000 games. Both players use a frame skip of 50.
towards paths with high win rate. Leaves are expanded with all candidate moves and the win rate
estimation is computed by random self-play until the game ends. We use 8 threads, each with 100
rollouts. We use root parallelization [9], in which each thread independently expands a tree, and the trees are combined to get the most visited action. As shown in Tbl. 7, MCTS achieves a comparable win rate to models trained with RL. Note that the win rates of the two methods are not directly comparable, since RL methods have no knowledge of game dynamics, and their state knowledge is reduced by the limits introduced by the fog-of-war. Also, MCTS runs much slower (2-3 sec per move) than the trained RL AI (≈ 1 msec per move).
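Root parallelization can be sketched as below: each worker grows its own tree and the root statistics are summed, which matches the description above of combining trees to pick the most visited action. `search_one_tree` is a placeholder for a full MCTS implementation, and the use of a Python thread pool (rather than the engine's C++ threads) is purely illustrative.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def root_parallel_mcts(root_state, search_one_tree, n_workers=8, n_rollouts=100):
    # `search_one_tree(state, n)` builds one tree with n rollouts and
    # returns {action: visit_count} at the root (placeholder).
    totals = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(search_one_tree, root_state, n_rollouts)
                   for _ in range(n_workers)]
        for f in futures:
            totals.update(f.result())          # sum visit counts across trees
    return totals.most_common(1)[0][0]         # most visited action overall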
5 Conclusion and Future Work
In this paper, we propose ELF, a research-oriented platform for concurrent game simulation which
offers an extensive set of game play options, a lightweight game simulator, and a flexible environment. Based on ELF, we build an RTS game engine and three initial environments (Mini-RTS, Capture the Flag and Tower Defense) that run at 40K FPS per core on a laptop. As a result, a full-game bot in these games can be trained end-to-end in one day using a single machine. In addition
to the platform, we provide throughput benchmarks of ELF, and extensive baseline results using
state-of-the-art RL methods (e.g., A3C [21]) on Mini-RTS, and show interesting learnt behaviors.
ELF opens up many possibilities for future research. With this lightweight and flexible platform, RL
methods on RTS games can be explored in an efficient way, including forward modeling, hierarchical
RL, planning under uncertainty, RL with complicated action space, and so on. Furthermore, the
exploration can be done with an affordable amount of resources. As future work, we will continue
improving the platform and build a library of maps and bots to compete with.
References
[1] Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, and Jan Kautz. Reinforcement learning through asynchronous advantage actor-critic on a GPU. International Conference on Learning Representations (ICLR), 2017.
[2] BattleCode. BattleCode, MIT's AI programming competition, 2000. URL https://www.battlecode.org/.
[3] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
[4] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. CoRR, abs/1207.4708, 2012. URL http://arxiv.org/abs/1207.4708.
[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning environment. CoRR, abs/1611.02205, 2016. URL http://arxiv.org/abs/1611.02205.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016. URL http://arxiv.org/abs/1606.01540.
[7] Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[8] Michael Buro and Timothy Furtak. On the development of a free RTS game engine. In GameOnNA Conference, pages 23–27, 2005.
[9] Guillaume M. J.-B. Chaslot, Mark H. M. Winands, and H. Jaap van den Herik. Parallel Monte-Carlo tree search. In International Conference on Computers and Games, pages 60–71. Springer, 2008.
[10] Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
[12] Stefan Johansson and Robin Westberg. Spring, 2008. URL https://springrts.com/.
[13] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016.
[14] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
[15] Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] RoboCup Simulation League, 1995. URL https://en.wikipedia.org/wiki/RoboCup_Simulation_League.
[17] Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[18] Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. 2013.
[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. ICLR, 2017.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[22] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296, 2015. URL http://arxiv.org/abs/1507.04296.
[23] Santiago Ontañón. The combinatorial multi-armed bandit problem and its application to real-time strategy games. In Proceedings of the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, pages 58–64. AAAI Press, 2013.
[24] OpenRA, 2007. URL http://www.openra.net/.
[25] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. CoRR, abs/1703.10069, 2017. URL http://arxiv.org/abs/1703.10069.
[26] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[27] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[28] Pumpkin Studios. Warzone 2100, 1999. URL https://wz2100.net/.
[29] Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. MazeBase: A sandbox for learning from games. CoRR, abs/1511.07401, 2015. URL http://arxiv.org/abs/1511.07401.
[30] Richard S. Sutton, David A. McAllester, Satinder P. Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063, 1999.
[31] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. TorchCraft: A library for machine learning research on real-time strategy games. CoRR, abs/1611.00625, 2016. URL http://arxiv.org/abs/1611.00625.
[32] Yuandong Tian and Yan Zhu. Better computer Go player with neural network and long-term prediction. arXiv preprint arXiv:1511.06410, 2015.
[33] Universe, 2016. URL universe.openai.com.
[34] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration for deep deterministic policies: An application to StarCraft micromanagement tasks. ICLR, 2017.
[35] Yuxin Wu and Yuandong Tian. Training agent for first-person shooter game with actor-critic curriculum learning. International Conference on Learning Representations (ICLR), 2017.
| 6859 |@word katja:1 repository:1 version:1 cnn:1 stronger:2 johansson:1 open:5 adrian:1 pieter:1 simulation:9 bn:2 pick:1 initial:3 wrapper:2 lightweight:6 score:1 configuration:2 contains:5 daniel:1 interestingly:1 existing:10 current:7 com:9 yet:1 issuing:1 guez:1 gpu:5 written:3 john:2 realistic:3 cant:1 hofmann:1 christian:1 enables:2 reproducible:1 designed:3 update:2 v:2 alone:1 sukhbaatar:1 selected:1 greedy:1 leaf:1 amir:1 intelligence:4 spec:4 rts:64 core:29 short:2 aja:1 oneto:1 yuxin:2 num:2 provides:1 pascanu:1 lua:1 quantized:1 philipp:2 attack:9 org:13 location:5 denis:1 five:1 simpler:1 along:2 jonas:1 fps:7 yuan:1 overhead:1 introduce:1 peng:2 indeed:1 andrea:1 themselves:1 planning:3 behavior:2 multi:5 simulator:1 terminal:2 td:1 automatically:2 soumith:3 cpu:15 equipped:1 armed:1 window:1 moreover:2 laptop:4 lowest:1 atari:7 complimentary:1 dharshan:1 substantially:2 deepmind:3 interpreter:1 unified:2 temporal:1 combat:2 every:1 starcraft:12 act:4 expands:1 interactive:1 zaremba:1 decouples:1 demonstrates:1 hit:14 control:2 szlam:1 unit:23 vald:1 alcicek:1 before:2 local:1 limit:1 switching:1 despite:1 sutton:1 soyer:1 laurent:2 path:1 might:4 emphasis:1 initialization:1 shaded:2 micha:1 limited:4 tian:2 range:3 testing:1 practice:1 block:1 implement:1 razvan:1 jan:1 demis:1 episodic:1 riedmiller:1 yan:1 convenient:2 pre:1 wait:3 petersen:2 arcade:1 get:4 cannot:2 close:1 context:5 doom:2 applying:1 bellemare:2 writing:2 instability:1 www:4 troop:4 equivalent:1 map:3 deterministic:1 send:2 straightforward:1 go:6 layout:1 independently:1 backend:4 focused:3 survey:1 hadsell:1 simplicity:2 defensive:1 flexibly:1 rule:12 live:1 gta:1 coordinate:1 banino:1 customizable:6 pt:1 commercial:2 heavily:1 diego:1 hierarchy:4 play:11 user:1 us:3 itay:1 programming:1 facebookresearch:4 expensive:1 particularly:2 utilized:1 continues:1 std:2 labeled:1 blocking:1 observed:2 role:1 levine:1 preprint:4 wang:1 capture:20 thousand:3 region:3 mirza:1 highest:1 deeply:1 alessandro:1 environment:37 pong:3 respecting:1 complexity:1 reward:11 ideally:2 heinrich:1 dynamic:6 trained:36 singh:2 concurrency:1 efficiency:2 translated:1 accelerate:2 easily:3 joint:1 various:2 lacroix:1 mazebase:4 train:6 fast:4 effective:2 describe:1 monte:5 artificial:3 vicki:1 sc:1 hyper:1 outside:1 sourced:2 enemy:9 multiprocessing:1 ward:1 final:1 interplay:1 triggered:1 advantage:4 net:5 propose:4 interaction:1 coming:1 loop:2 flexibility:3 poorly:1 ludwig:1 achieve:1 building:1 coumans:1 description:3 breakout:1 competition:2 ijcai:1 convergence:1 empty:1 r1:1 nadav:1 generating:1 silver:4 leave:1 adam:1 tim:2 sarah:1 batchnorm:1 andrew:6 develop:1 help:1 completion:1 adaptor:2 received:2 defends:1 keith:1 strong:3 edward:1 implemented:3 skip:15 come:1 goroshin:1 screenshots:1 filter:1 exploration:3 human:3 mcallester:1 virtual:1 require:2 resilient:1 exchange:1 assign:1 ja:1 abbeel:1 sandbox:1 sainbayar:1 awni:2 extension:1 pytorch:1 batchsize:2 around:2 considered:1 influence:1 lawrence:1 mapping:1 snes:1 matthew:1 miniature:2 stabilizes:1 optimizer:4 tavener:1 early:1 achieves:1 purpose:1 estimation:3 proc:1 combinatorial:1 visited:1 hubara:1 ross:1 concurrent:3 gauge:1 arun:1 tool:1 stefan:1 mit:1 concurrently:1 spyridon:1 sight:4 rather:1 denil:1 avoid:2 incompatibility:1 rusu:1 command:23 derived:1 focus:3 maria:1 legg:2 properly:1 check:3 contrast:3 defend:1 baseline:8 abstraction:1 dependent:1 inaccurate:1 integrated:1 typically:3 initially:1 bandit:1 tank:8 issue:2 among:1 pixel:1 flexible:10 lucas:1 
development:1 platform:25 art:1 special:1 having:1 beach:1 ng:2 veness:2 piotr:1 progressive:2 koray:4 icml:3 throughput:1 future:3 elf:34 brockman:1 others:2 micro:1 richard:1 wen:1 modern:5 oriented:3 report:1 composed:1 preserve:1 powley:1 producer:4 few:4 delayed:4 rollouts:1 ab:16 harley:1 ostrovski:1 gaffney:1 possibility:1 mnih:3 highly:5 custom:1 evaluation:5 joel:3 mixture:1 winands:1 fog:6 perez:1 light:1 misha:1 hubert:1 emit:1 capable:3 worker:3 arthur:2 experience:5 partial:4 filled:2 tree:8 old:1 puigdomenech:1 incomplete:1 a3c:18 rozenberg:1 rush:1 minimal:2 nardelli:1 instance:3 modeling:1 decision:2 cost:1 strategic:2 deviation:2 hundred:1 uniform:1 johnson:1 graphic:2 reported:1 learnt:1 combined:7 thoroughly:1 st:3 thanks:1 grand:1 international:4 person:1 fundamental:2 physic:2 rounded:1 michael:3 concrete:1 zeming:2 linux:1 aaai:2 huang:1 style:1 return:2 wojciech:2 szegedy:1 volodymyr:3 potential:1 nonlinearities:2 de:1 aggressive:1 sec:3 coding:1 includes:2 stabilize:1 tactical:2 santiago:1 satisfy:1 matter:1 ioannis:1 register:1 coordinated:1 invasion:2 script:1 root:1 jason:1 lab:3 red:2 start:3 jaap:1 option:1 parallel:2 kautz:1 capability:1 complicated:2 shai:1 simon:3 robocup:4 greg:1 convolutional:1 synnaeve:3 yield:1 resume:1 yes:13 generalize:1 raw:1 kavukcuoglu:4 none:1 carlo:5 researcher:1 drive:1 processor:1 promptly:1 history:11 simultaneous:1 reach:1 cumbersome:1 facebook:1 definition:1 against:7 involved:1 chintala:3 sampled:2 stop:2 emits:1 popular:1 color:1 knowledge:3 improves:1 subtle:1 sophisticated:2 back:2 focusing:1 lample:1 higher:1 supervised:1 day:4 tom:1 response:1 follow:1 done:1 anderson:1 furthermore:3 just:1 reply:7 until:5 hand:1 receives:1 web:1 touch:1 mehdi:1 trust:2 invade:1 resemblance:1 bullet:2 usage:1 effect:2 lillicrap:1 contain:1 true:2 usa:1 counterpart:1 spell:1 facility:1 assigned:1 moritz:1 spatially:1 whitehouse:1 leaked:1 game:147 self:2 bowling:1 during:4 rooted:1 please:1 criterion:1 tactic:2 complete:3 demonstrate:2 mohammad:1 invisible:1 bring:1 oculus:2 interface:9 reasoning:1 recently:1 charles:2 superior:1 wikipedia:2 rl:22 overview:2 volume:2 interpretation:1 yuandong:4 significant:1 blocked:3 refer:1 ai:67 enter:1 tuning:1 vanilla:2 league:5 similarly:3 language:1 stable:1 badia:1 helen:1 access:5 actor:9 etc:9 base:10 add:1 mirowski:1 longer:2 durative:2 own:5 driven:1 apart:1 scenario:5 massively:1 manipulation:1 certain:2 issued:1 server:1 buffer:3 continue:1 accomplished:1 cain:1 minimum:1 george:1 additional:1 florian:1 schneider:1 determine:1 converge:1 paradigm:1 ale:8 ii:3 branch:1 stephen:3 full:11 multiple:8 reduces:1 signal:1 usability:1 academic:1 plug:1 offer:7 long:10 lin:2 faster:7 host:3 cameron:1 coded:2 raia:1 controlled:1 prediction:2 variant:2 simplistic:1 basic:1 heterogeneous:1 arxiv:16 erwin:1 sergey:2 affordable:1 normalization:4 achieved:2 addition:3 affecting:1 fine:1 median:3 source:3 suleyman:1 biased:1 extra:1 parallelization:1 micromanagement:1 shane:2 sent:2 quan:1 legend:1 flow:1 mod:1 name:1 jordan:1 ee:1 fearon:1 yang:1 granularity:1 intermediate:2 easy:2 destroyed:2 rendering:1 divisible:1 switch:1 relu:7 browne:1 affect:2 architecture:8 topology:4 competing:1 wu1:1 andreas:1 shift:1 pettersson:1 thread:23 handled:1 retro:1 veda:1 war:6 accelerating:1 url:14 defense:14 penalty:1 render:1 peter:1 proceed:1 afford:1 york:1 action:17 yishay:1 deep:7 jie:1 generally:1 gabriel:3 detailed:2 wenling:1 clear:1 collision:1 covered:1 hosting:2 amount:3 dark:1 mid:2 processed:1 reduced:2 
http:23 wiki:2 canonical:1 hyperthreading:1 toend:2 bot:8 gil:2 intelligent:1 wendy:1 per:18 blue:2 diverse:8 naddaf:1 georg:1 srinivasan:1 waiting:2 key:5 openai:7 tbl:8 changing:3 leibo:1 utilize:2 backward:1 destroy:1 compete:1 run:25 uncertainty:1 master:2 orts:2 daemon:2 wu:1 realtime:1 lanctot:1 rory:1 scaling:1 summarizes:1 cowling:1 bit:1 comparable:3 appendix:1 bolton:1 capturing:1 your:3 alex:3 software:1 aspect:1 speed:3 extremely:1 spring:2 yavar:1 expanded:1 martin:1 gpus:1 structured:1 debugging:1 battle:1 sam:1 mastering:1 rob:1 pumpkin:1 modification:1 chess:2 praveen:1 den:2 gradually:1 gathering:1 resource:10 visualization:1 previously:1 hannun:2 mechanism:2 needed:1 know:2 antonoglou:1 end:9 sending:2 rohlfshagen:1 usunier:2 panneershelvam:2 available:3 experimentation:1 opponent:20 apply:1 hierarchical:7 stig:2 save:1 gym:7 batch:21 professional:1 hassabis:1 slower:3 top:2 running:2 entertainment:1 include:1 lock:1 exploit:1 build:13 threading:1 move:7 intend:1 already:1 degrades:1 strategy:19 damage:1 randomize:1 rt:1 gradient:3 win:19 wrap:2 iclr:4 fabio:1 fidjeland:1 accessibility:1 chris:1 tower:14 maddison:1 threaded:3 samothrakis:1 marcus:1 consumer:9 length:4 besides:1 copying:1 code:2 mini:35 julian:2 providing:1 demonstration:1 ying:1 schrittwieser:2 mostly:1 executed:2 bulletphysics:1 statement:1 info:1 negative:1 design:3 policy:6 unknown:1 perform:2 attacker:2 herik:1 ctor:1 kumaran:1 benchmark:1 behave:1 beat:2 immediate:2 scripting:1 situation:3 incorporated:1 communication:3 precise:1 extended:1 frame:23 viola:1 head:1 mansour:1 ninth:1 buro:1 grzegorz:1 david:6 introduced:1 cast:1 required:1 blackwell:1 extensive:8 engine:19 acoustic:2 nip:2 able:3 gameplay:1 parallelism:8 challenge:1 built:22 max:1 including:6 green:1 marek:1 endto:1 wainwright:1 dict:2 video:1 difficulty:1 hybrid:1 shifting:1 critical:1 curriculum:16 customized:2 zhu:1 improve:3 github:4 library:2 numerous:1 mcts:7 created:1 ready:2 hm:1 jun:1 coupled:3 schulman:2 python:16 powered:1 graf:2 fully:1 multiagent:1 loss:1 highlight:1 bear:1 interesting:3 proven:1 digital:1 agent:13 gather:4 s0:2 editor:1 tyree:1 emulator:1 playing:2 share:4 heavy:2 critic:4 pi:1 maas:2 supported:1 last:3 copy:1 asynchronous:3 free:1 tick:14 side:5 chaslot:1 sparse:3 leaky:6 ghz:1 van:2 boundary:1 dimension:1 world:4 rich:2 fb:1 forward:4 commonly:2 reinforcement:13 made:1 sifre:2 reside:1 adaptive:1 avg:1 transaction:1 apis:1 satinder:1 global:2 colton:1 mustafa:1 ioffe:1 consuming:1 fergus:1 forbids:1 search:6 shooter:1 table:7 hutton:1 robin:1 channel:3 ballard:1 exploitability:1 ca:1 controllable:1 learn:2 init:1 nature:4 improving:1 mj:1 e5:1 distract:1 nicolas:2 expansion:1 complex:5 zitnick:1 marc:3 main:2 universe:5 fair:1 collector:1 exploitable:1 fig:9 intel:1 benchmarking:1 en:2 board:4 batched:2 andrei:1 aid:2 msec:1 exponential:1 replay:2 candidate:1 perceptual:1 learns:2 tang:2 minute:2 vizdoom:2 embed:1 down:1 specific:3 rectifier:2 covariate:1 showing:1 navigate:1 jakub:1 list:1 explored:1 normalizing:1 vedavyas:1 essential:1 adding:1 corr:8 magnitude:2 execution:1 studio:1 horizon:3 timothy:2 army:1 explore:3 bidirectionally:1 visual:3 vulnerable:1 sadik:1 applies:1 driessche:1 springer:1 nair:1 identity:1 cheung:1 king:1 acceleration:1 adria:1 towards:3 content:1 change:7 hard:2 rle:2 infinite:1 reducing:1 acting:2 justify:1 flag:22 shang:1 beattie:2 e:1 player:14 exception:1 guillaume:2 internal:8 mark:1 paralleled:2 probabilties:1 tested:2 scratch:1 |
6,478 | 686 | Self-Organizing Rules for Robust
Principal Component Analysis
Lei Xu l ,2"'and Alan Yuille l
1. Division of Applied Sciences, Harvard University, Cambridge, MA 02138
2. Dept. of Mathematics, Peking University, Beijing, P.R.China
Abstract
In the presence of outliers, the existing self-organizing rules for
Principal Component Analysis (PCA) perform poorly. Using statistical physics techniques including the Gibbs distribution, binary
decision fields and effective energies, we propose self-organizing
PCA rules which are capable of resisting outliers while fulfilling
various PCA-related tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly
finding the subspace spanned by the first k vector principal component vectors without solving for each vector individually. Comparative experiments have shown that the proposed robust rules
improve the performances of the existing PCA algorithms significantly when outliers are present.
1
INTRODUCTION
Principal Component Analysis (PCA) is an essential technique for data compression
and feature extraction, and has been widely used in statistical data analysis, communication theory, pattern recognition and image processing. In the neural network
literature, a lot of studies have been made on learning rules for implementing PCA
or on networks closely related to PCA (see Xu & Yuille, 1993 for a detailed reference
list which contains more than 30 papers related to these issues). The existing rules
can fulfil various PCA-type tasks for a number of application purposes.
"'Present address: Dept. of Brain and Cognitive Sciences, E10-243, Massachusetts
Institute of Technology, Cambridge, MA 02139.
467
468
Xu and Yuille
However, almost all the previously mentioned peA algorithms are based on the
assumption that the data has not been spoiled by outliers (except Xu, Oja&Suen
1992, where outliers can be resisted to some extent.). In practice, real data often
contains some outliers and usually they are not easy to separate from the data set.
As shown by the experiments described in this paper, these outliers will significantly
worsen the performances of the existing peA learning algorithms. Currently, little
attention has been paid to this problem in the neural network literature, although
the problem is very important for real applications.
Recently, there have been some success in applying t:te statistical physics approach
to a variety of computer vision problems (Yuille, 1990; Yuille, Yang&Geiger 1990;
Yuille, Geiger&Bulthoff, 1991). In particular, it has also been shown that some
techniques developed in robust statistics (e.g., redescending M-estimators, leasttrimmed squares estimators) appear naturally within the Bayesian formulation by
the use of the statistical physics approach. In this paper we adapt this approach
to tackle the problem of robust PCA. Robust rules are proposed for various PCArelated tasks such as obtaining the first principal component vector, the first k
principal component vectors, and principal subspaces. Comparative experiments
have been made and the results show that our robust rules improve the performances
of the existing peA algorithms significantly when outliers are present.
2
peA LEARNING AND ENERGY MINIMIZATION
There exist a number of self-organizing rules for finding the first principal component. Three of them are listed as follows (Oja 1982, 85; Xu, 1991,93):
m(t + 1) = m(t) + aa(t)(xy - m(t)y2),
(1)
+ 1) = m(t) + aa(t)(xy - m(~~~(t)y2),
m(t + 1) = m(t) + aa(t)[y(x - iI) + (y - y')X].
m(t
(2)
(3)
where y = m(t)T x, iI = ym(t), y' = m(tf iI and aa(t) 2:: 0 is the learning rate which
decreases to zero as t -- 00 while satisfying certain conditions, e.g., Lt aa(t) =
00,
Lt aa(t)q < 00 for some q> 1.
i
Each of the three rules will converge to the principal component vector
almost
surely under some mild conditions which are studied in detail in by Oja (1982&85)
and Xu (1991&93). Regarding mas the weight vector of a linear neuron with output
y = T x, all the three rules can be considered as modifications of the well known
Hebbian rule m(t + 1) = m(t) + aa(t)xy through introducing additional terms for
preventing IIm(t)1I from going to 00 as t -- 00.
m
The performances of these rules deteriorate considerably when data contains outliers. Although some outlier-resisting versions of eq.(l) and eq.(2) have also been
recently proposed (Xu, Oja & Suen, 1992), they work well only for data which is not
severely spoiled by outliers. In this paper, we adopt a totally different approach-we
generalize eq.(1),eq.(2) and eq.(3) into more robust versions by using the statistical
physics approach.
To do so, first we need to connect these rules to energy functions. It follows from Xu
(1991&93) and Xu & Yuille(1993) that the rules eq.(2) and eq.(3) are respectively
Self-Organizing Rules for Robust Principal Component Analysis
on-line gradient descent rules for minimizing J 1 (m), J 2 (m) respectivelyl:
-T - ::::T I N
-. _ m -T
Xixi_ m)
J 1 ( m-) = _ "'(-'!'
L..J x, X,
N i=l
m m
(4)
N
hem)
=~ L
!Iii -
uill 2 .
(5)
i=1
It has also been proved that the rule given by eq.(l) satisfies (Xu, 1991, 93):
(a) hTh2 2: 0,E(hJ) T JJ(h1) 2: 0, with hI
iy-my2, h2
iy- mo/.m y2 ; (b)
E(hl)TE(h3) > 0, with h3 = y(i-iI)+(y-y')i; (c) Both J1 and h have only one
and all the other critical points (i.e.,
local (also global) minimum tr(~) m
the points satisfy 8J ) = 0, i = 1,2) are saddle points. Here ~ = E{ii t}, and
is the eigenvector of r- corresponding to the largest eigenvalue.
=
=
iI'r-i,
ak
i
That is, the rule eq.(l) is a downhill algorithm for minimizing J 1 in both the on
line sense and the average sense, and for minimizing J 2 in the average sense.
3
GENERALIZED ENERGY AND ROBUST peA
We further regard J 1 (m), J2(m) as special cases of the following general energy:
N
= ~L Z(ii, m),
Z(ii' m) 2: 0.
i=1
where Z(ii' m) is the portion of energy contributed by the sample ii, and
J(m)
(6)
(7)
Following (Yuille, 1990 a& b), we now generalize energy eq.(6) into
E(V, m)
=
= L:f:1 Vi
Z(ii' m)
+ Eprior(V)
(8)
=
where V {Vi, i
1, .. " N} is a binary field {\Ii} with each \Ii being a random
variable taking value either 0 or 1. \Ii acts as a decision indicator for deciding
whether ii is an outlier or a sample. When \Ii = 1, the portion of energy contributed
by the sample ii is taken into consideration; otherwise, it is equivalent to discarding
ii as an outlier. Eprior(V) is the a priori portion of energy contributed by the a
priori distribution of {Vi}. A natural choice is
N
EpriorCV)
= 11 1:(1- Vi)
(9)
i=1
This choice of priori has a natural interpretation: for fixed m it is energetically
favourable to set \Ii
1 (i.e., not regarding ii as an outlier) if Z(ii' m) < yfii (i.e.,
=
lWe have J1(ffi)
2: 0, since
iTi - m"fm =
lIiW sin 2 (Jxm 2: o.
469
470
Xu and Yuille
the portion of energy contributed by Xi is smaller than a prespecified threshold)
and to set it to 0 otherwise.
Based on E(V, m), we define a Gibbs distribution (Parisi 1988):
1
[= _e-{3E
V,m-]
- m]
P[V
'z
where Z is the partition function which ensures
compute
(10)
'
Lv Lm pry, m] = 1.
Then we
L
-{3 ~ {V,z(x"m)+T/(l-V,)}
Z _ e L..J,
-1
Pmargin(m)
v
!
Z
II L
.
,
e-{3{V,z(x"m)+T/(l- V,)} = _1_ e-{3EeJJ (m). (11)
Zm
V,={O,l}
EeJj(m) = -1 Llog{1
(3 i
+ e-{3{z(x"m)-T/}}.
(12)
Eel! is called the effective energy. Each term in the sum for Eel I is approximately
z(xi,m) for small values of Z but becomes constant as z(xi,m) -+ 00. In this way
outliers, which are more likely to yield large values of z( Xi, m), are treated differently
from samples, and thus the estimation m obtained by minimizing EeJj(m) will be
robust and able to resist outliers.
Ee! f (m) is usually not a convex function and may have many local minima. The
statistical physics framework suggests using deterministic annealing to minimize
EeJj(m). That is, by the following gradient descent rule eq.(13), to minimize
EeJj(m) for small (3 and then track the minimum as (3 increases to infinity (the
zero temperature limit):
_(
m t
)
+1
_()
= m t -
(~
lYb
1
t) ~ 1 + e{3(z(x"m(f))-T/)
,
oz(xi,m(t))
om(t)
.
(13)
More specifically, with z's chosen to correspond to the energies hand J2 respectively, we have the following batch-way learning rules for robust peA:
_(
m t
met
)
_( )
t
+ 1 =m
+ lYb
( )~
1
(_
m( t)
2)
t ~ 1 + e{3(z(x"m(t))-T/) XiYi - m(t)Tm(t)Yi'
()
14
z
+ 1) = met) + abet) ~ 1 + e{3(Z(;"m(f))-T/) [Yi(Xi - ild + (Yi
,
- yDXi].
(15)
For data that comes incrementally or in the on-line way, we correspondingly have
the following adaptive or stochastic approximation versions
-(
m t
met
()
1
(+ 1) = m-C)
t + aa t 1 + e{3(z(x"m(t))-17) XiYi
+ 1) = met) + aa(t) 1 + e{3(Z(;"m(t))-17) [Yi(Xi
met)
2)
- m(t)T met) Yi ,
- iii)
+ (Yi
- YDXi].
(16)
(17)
Self-Organizing Rules for Robust Principal Component Analysis
It can be observed that the difference between eq.(2) and eq.(16) or eq.(3) and
eq.(17) is that the learning rate G'a(t) has been modified by a multiplicative factor
1
G'm(t) = 1 + e{j(Z(tri,m(t))-")'
(18)
which adaptively modifies the learning rate to suit the current input Xi. This
modifying factor has a similar function as that used in Xu, Oja&Suen(1992) for
robust line fitting. But the modifying factor eq.(18) is more sophisticated and
performs better.
Based on the connecticn between the rule eq.(I) and J 1 or J2 , given in sec.2, we
can also formally use t il e modifying factor G'm(t) to turn the rule eq.(I) into the
following robust version:
met
4
+ 1) = met) + G'a(t) 1 + e{j(Z(;.,m(t))-,,) (iiYi
- m(t)yi),
(19)
ROBUST RULES FOR k PRINCIPAL COMPONENTS
In a similar way to SGA (Oja, 1992) and GHA (Sanger, 1989) we can generalize the
robust rules eq.(19), eq.(16) and eq.(17) into the following general form of robust
rules for finding the first k principal components:
mj(t + 1)
= mj(t) + G'a(t) 1 + e{j(Z(tr)n,m;(t))-,,) ~mj(xi(j), mj(t?,
j-l
Xi(O) = ii,
ii(j + 1) = Xi(j) -
L Yi(r)mr(t),
Yi(j) =
mJ (t)ii(j),
(20)
(21)
r=l
where ~mj(ii(j), mj(t?, Z(Xi(j), mj(t? have four possibilities (Xu & Yuille, 1993).
As an example, one of them is given here
dmj(xi(j), mj(t?
.. (.) .. (t?
Z(Xi J ,mj
= (Xi(j)Yi(j) -
- (.)
= Xi.. (')T
J Xi J -
mj(t)Yi(j)2),
Yi(j)2
mj(t)Tmj(t)'
In this case, eq.(20) can be regarded as the generalization of GHA (Sanger, 1989).
We can also develop an alternative set of rules for a type of nets with asymmetric
lateral weights as used in (Rubner&Schulten, 1990). The rules can also get the first
k principal components robustly in the presence of outliers (Xu & Yuille, 1993).
5
ROBUST RULES FOR PRINCIPAL SUBSPACE
=
=
=
=
Let M [ml, .. " mk], ~ [?1, .. " ?k], Y [Yl, .. " Ykf and y MT X, it follows
from Oja(1989) and Xu(1991) the rules eq.(l), eq.(3) can be generalized into eq.(22)
and eq.(23) respectively:
(22)
471
472
Xu and Yuille
u-
= M-y, y = MTa
(23)
In the case without outliers, by both the rules, the weight matrix M(t) will converge
to a matrix MOO whose column vectors mj, j = 1,"" k span the k-dimensional
principal subspace (Oja, 1989; Xu, 1991&93), although the vectors are, in general,
not equal to the k principal component vectors ?j, j = 1, ... , k.
Similar to the previously used procedure, we have the following results:
(1). We can SllOW that eq.(23) is an on-line or stochastic approximation rule which
minimizes the energy 13 in the gradient descent way (Xu, 1991& 93):
N
J 3 (ffi)
= ~ L: IIXi - ai ll 2 ,
a = My,
Y' = MT iI.
(24)
i=l
and that in the average sense the subspace rule eq.(22) is also an on-line "down-hill"
rule for minimizing the energy function Ja.
(2). We can also generalize the non-robust rules eq.(22) and eq.(23) into robust
versions by using the statistical physics approach again:
M(t
+ 1) = M(t) + GA(t) 1 + e!3(I//-U.1I2_'1) [Yi(Xi - ild T
M(t
6
+ 1) =
M(t)
Y1)iT]'
(25)
1
-,..fJ'-~
+ GA(t) 1 + e!3(l/x.-u;1/2_'1)
[y,Xi - YiY, M(t)]
(26)
-
(fii -
EXAMPLES OF EXPERIMENTAL RESULTS
Let x from a population of 400 samples with zero mean. These samples are located
on an elliptic ring centered at the origin of R3 , with its largest elliptic axis being
along the direction (-1,1,0), the plane of its other two axes intersecting the x - Y
plane with an acute angle (30?). Among the 400 samples, 10 points (only 2.5%) are
randomly chosen and replaced by outliers. The obtained data set is shown in Fig.1.
Before the outliers were introduced, either the conventional simple-variance-matrix
L~l iiX[) or the unrobust rules
based approach (i.e., solving S? = A?, S =
eqs.(I)(2)(3) can find the correct 1st principal component vector of this data set.
k
On the data set contaminated by outliers, shown in Fig.l, the result of the simplevariance-matrix based approach has an angular error of ?p by 71.04?-a result
definitely unacceptable. The results of using the proposed robust rules eq.(19),
eq.(16) and eq.(17) are shown in Fig.2(a) in comparison with those of their unrobust
counterparts- the rules eq.(I), eq.(2) and eq.(3). We observe that all the unrobust
rules get the solutions with errors of more than 21? from the correct direction of
?p. By contrast, the robust rules can still maintain a very good accuracy-the
error is about 0.36?. Fig.2(b) gives the results of solving for the first two principal
component vectors. Again, the unrobust rule produce large errors of around 23?,
while the robust rules have an error of about 1. 7? . Fig.3 shows the results of
soIling for the 2-dimensional principal subspace, it is easy to see the significant
improvements obtained by using the robus.t rules.
Self-Organizing Rules for Robust Principal Component Analysis
"
,
, '
, ,
\\'
,~
"~"114"
.,
~
?
?
J
t
?
,
f
.,
~
?
I
2
?
?
J
?
Figure 1: The projections of the data on the x - y, y - z and z - x planes, with 10
outliers.
Acknowledgements
We would like to thank DARPA and the Air Force for support with contracts
AFOSR-89-0506 and F4969092-J-0466.
We like to menta ion that some further issues about the proposed robust rules are studied
in Xu & Yuille (1993), including the selection of parameters 0', j3 and 1], the extension of
the rules for robust Minor Component Analysis (MCA) , the relations between the rules
to the two main types of existing robust peA algorithms in the literature of statistics, as
well as to Maximal Likelihood (ML) estimation of finite mixture distributions.
References
E. Oja, J. Math. Bio. 16, 1982,267-273.
E. Oja & J. Karhunen, J. Math. Anal. Appl. 106,1985,69-84.
E. Oja, Int. J. Neural Systems 1, 1989,61-68.
E. Oja, Neural Networks 5, 1992, 927-935.
G. Parisi, Statistical Field Theory, Addison-Wesley, Reading, Mass., 1988.
J. Rubner & K. Schulten, Biological Cybernetics, 62, 1990, 193-199.
T.D. Sanger, Neural Networks, 2, 1989,459-473.
L. Xu, Proc. of IJCNN'91-Singapore, Nov., 1991,2368-2373.
L. Xu, Least mean square error reconstruction for self-organizing neural-nets, Neural
Networks 6, 1993, in press.
L. Xu, E. Oja & C.Y. Suen, Neural Networks 5, 1992,441-457.
L. Xu & A.L. Yuille, Robust principal component analysis by self-organizing rules
based on statistical physics approach, IEEE Trans. Neural Networks, 1993, in press.
A.L. Yuille, Neural computation 2, 1990, 1-24.
A.L. Yuille, D. Geiger and H.H. Bulthoff,Networks 2, 1991. 423-442.
473
474
Xu and Yuille
--.
..
(b)
(a)
Figure 2: The learning curves obtained in the comparative experiments for principal
component vectors. (a) for the first principal component vector, RAl, RA2, RA3 denote the robust rules eq.(19), eq.(16) and eq.(17) respectively, and U AI, U A2, U A3
denote the rules eq.(l), eq.(2) and eq.(3) respectively. The horizontal axis denotes
the learning steps, and the vertical axis is (Jm(t)?Pl with (Jx,y denoting the acute
angle between x and y. (b) for the first two principal component vectors, by the
robust rule eq.(20) and its unrobust counterpart GHA. U Akl, U Ak2 denote the
learning curves of angles (Jml(t)?Pl and (Jm2(t)?P2 respectively, obtained by GHA .
RAk 1, RAk2 denote the learning curves of the angles obtained by using the robust
rule eq.(20). In both (a) & (b), pj , j = 1,2 is the correct 1st and 2nd principal
component vector respectively.
i
t
t
1_ _ _ _ _ _ _
_
........
Figure 3: The learning curves obtained in the comparative experiments for for solving the 2-dimensional principal subspace. Each learning curve expresses the change
of the residual er(t) = L:J=ll!mj(t) - L:;=l(mj(tf pr)?prI12 with learning steps.
The smaller the residual, the closer the estimated principal subspace to the correct
one. SU Bl, SU B2 denote the unrobust rules eq.(22) and eq.(23) respectively, and
RSU Bl, RSU B2 denote the robust rules eq.(26) and eq.(25) respectively.
i
| 686 |@word mild:1 version:5 compression:1 nd:1 paid:1 tr:2 contains:3 denoting:1 existing:6 current:1 xiyi:2 moo:1 partition:1 j1:2 plane:3 prespecified:1 math:2 along:1 unacceptable:1 fitting:1 deteriorate:1 brain:1 little:1 jm:1 totally:1 becomes:1 mass:1 minimizes:1 eigenvector:1 akl:1 developed:1 finding:3 act:1 tackle:1 bio:1 appear:1 before:1 local:2 limit:1 severely:1 ak:1 approximately:1 china:1 studied:2 suggests:1 appl:1 practice:1 dmj:1 procedure:1 significantly:3 sga:1 projection:1 get:2 ga:2 selection:1 applying:1 equivalent:1 deterministic:1 conventional:1 modifies:1 attention:1 convex:1 rule:55 estimator:2 regarded:1 spanned:1 population:1 fulfil:1 spoiled:2 origin:1 harvard:1 recognition:1 satisfying:1 located:1 asymmetric:1 observed:1 ensures:1 decrease:1 mentioned:1 solving:4 yuille:17 division:1 darpa:1 differently:1 various:3 effective:2 whose:1 my2:1 widely:1 otherwise:2 statistic:2 eigenvalue:1 parisi:2 net:2 propose:1 reconstruction:1 maximal:1 zm:1 j2:3 organizing:9 poorly:1 oz:1 produce:1 comparative:4 ring:1 develop:1 minor:1 h3:2 eq:49 p2:1 come:1 met:8 direction:2 closely:1 correct:4 modifying:3 pea:7 stochastic:2 centered:1 implementing:1 ja:1 generalization:1 biological:1 extension:1 pl:2 around:1 considered:1 deciding:1 mo:1 lm:1 adopt:1 a2:1 jx:1 purpose:1 estimation:2 proc:1 currently:1 individually:1 largest:2 tf:2 minimization:1 suen:4 modified:1 hj:1 ax:1 improvement:1 ral:1 likelihood:1 contrast:1 sense:4 relation:1 going:1 issue:2 among:1 priori:3 special:1 ak2:1 field:3 equal:1 extraction:1 contaminated:1 yiy:1 randomly:1 oja:13 replaced:1 maintain:1 suit:1 possibility:1 mixture:1 uill:1 capable:1 closer:1 xy:3 mk:1 lwe:1 column:1 introducing:1 hem:1 connect:1 considerably:1 my:1 adaptively:1 st:2 definitely:1 contract:1 physic:7 eel:2 yl:1 ym:1 iy:2 intersecting:1 again:2 cognitive:1 sec:1 b2:2 int:1 satisfy:1 vi:4 multiplicative:1 h1:1 lot:1 portion:4 worsen:1 minimize:2 air:1 square:2 om:1 il:1 variance:1 accuracy:1 yield:1 correspond:1 generalize:4 bayesian:1 cybernetics:1 energy:14 naturally:1 resisting:2 proved:1 massachusetts:1 sophisticated:1 wesley:1 formulation:1 angular:1 hand:1 bulthoff:2 horizontal:1 ykf:1 su:2 ild:2 incrementally:1 lei:1 y2:3 counterpart:2 sin:1 ll:2 self:9 generalized:2 hill:1 performs:1 temperature:1 fj:1 image:1 consideration:1 recently:2 mt:2 interpretation:1 significant:1 cambridge:2 gibbs:2 ai:2 mathematics:1 acute:2 fii:1 tmj:1 certain:1 binary:2 success:1 yi:13 minimum:3 additional:1 mca:1 mr:1 surely:1 converge:2 ii:27 hebbian:1 alan:1 adapt:1 peking:1 j3:1 vision:1 ion:1 annealing:1 jml:1 tri:1 ee:1 presence:2 yang:1 iii:2 easy:2 variety:1 fm:1 regarding:2 tm:1 whether:1 pca:9 energetically:1 jj:1 detailed:1 listed:1 iixi:1 exist:1 singapore:1 estimated:1 track:1 express:1 four:1 threshold:1 pj:1 sum:1 beijing:1 angle:4 almost:2 geiger:3 decision:2 ydxi:2 hi:1 ijcnn:1 infinity:1 span:1 mta:1 smaller:2 modification:1 hl:1 outlier:22 fulfilling:1 pr:1 taken:1 previously:2 turn:1 ffi:2 r3:1 addison:1 observe:1 elliptic:2 robustly:1 batch:1 alternative:1 denotes:1 iix:1 sanger:3 bl:2 gradient:3 subspace:8 separate:1 thank:1 lateral:1 extent:1 minimizing:5 anal:1 perform:1 redescending:1 contributed:4 vertical:1 neuron:1 iti:1 finite:1 descent:3 communication:1 y1:1 introduced:1 resist:1 trans:1 address:1 able:1 usually:2 pattern:1 reading:1 including:2 critical:1 natural:2 treated:1 force:1 indicator:1 residual:2 improve:2 technology:1 pry:1 axis:3 literature:3 acknowledgement:1 afosr:1 rak:1 lv:1 h2:1 rubner:2 
iim:1 institute:1 taking:1 correspondingly:1 regard:1 curve:5 preventing:1 made:2 adaptive:1 nov:1 ml:2 global:1 xi:19 mj:15 robust:32 obtaining:2 main:1 xu:24 fig:5 gha:4 downhill:1 schulten:2 down:1 discarding:1 er:1 favourable:1 list:1 a3:1 essential:1 resisted:1 te:2 karhunen:1 lt:2 saddle:1 likely:1 aa:9 satisfies:1 ma:3 change:1 specifically:1 except:1 llog:1 principal:30 called:1 experimental:1 e10:1 formally:1 support:1 iiyi:1 dept:2 |
6,479 | 6,860 | Dual Discriminator Generative Adversarial Nets
Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung
Deakin University, Geelong, Australia
Centre for Pattern Recognition and Data Analytics
{tu.nguyen, trung.l, hungv, dinh.phung}@deakin.edu.au
Abstract
We propose in this paper a novel approach to tackle the problem of mode collapse
encountered in generative adversarial network (GAN). Our idea is intuitive but
proven to be very effective, especially in addressing some key limitations of GAN.
In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into
a unified objective function, thus it exploits the complementary statistical properties
from these divergences to effectively diversify the estimated density in capturing
multi-modes. We term our method dual discriminator generative adversarial nets
(D2GAN) which, unlike GAN, has two discriminators; and together with a generator, it also has the analogy of a minimax game, wherein a discriminator rewards high
scores for samples from data distribution whilst another discriminator, conversely,
favoring data from the generator, and the generator produces data to fool both two
discriminators. We develop theoretical analysis to show that, given the maximal
discriminators, optimizing the generator of D2GAN reduces to minimizing both
KL and reverse KL divergences between data distribution and the distribution
induced from the data generated by the generator, hence effectively avoiding the
mode collapsing problem. We conduct extensive experiments on synthetic and
real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we
have made our best effort to compare our D2GAN with the latest state-of-the-art
GAN?s variants in comprehensive qualitative and quantitative evaluations. The
experimental results demonstrate the competitive and superior performance of our
approach in generating good quality and diverse samples over baselines, and the
capability of our method to scale up to ImageNet database.
1
Introduction
Generative models are a subarea of research that has been rapidly growing in recent years, and
successfully applied in a wide range of modern real-world applications (e.g., see chapter 20 in [9]).
Their common approach is to address the density estimation problem where one aims to learn a
model distribution pmodel that approximates the true, but unknown, data distribution pdata . Methods in
this approach deal with two fundamental problems. First, the learning behaviors and performance
of generative models depend on the choice of objective functions to train them [29, 15]. The most
widely-used objective, considered the de-facto standard one, is to follow the principle of maximum
likelihood estimate that seeks model parameters to maximize the likelihood of training data. This is
equivalent to minimizing the Kullback-Leibler (KL) divergence between data and model distributions:
DKL (pdata kpmodel ). It has been observed that this minimization tends to result in pmodel that covers
multiple modes of pdata , but may produce completely unseen and potentially undesirable samples [29].
By contrast, another approach is to swap the arguments and instead, minimize: DKL (pmodel kpdata ),
which is usually referred to as the reverse KL divergence [23, 11, 15, 29]. It is observed that
optimization towards the reverse KL divergence criteria mimics the mode-seeking process where
pmodel concentrates on a single mode of pdata while ignoring other modes, known as the problem of
mode collapse. These behaviors are well-studied in [29, 15, 11].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The second problem is the choice of formulation for the density function of pmodel [9]. One might
choose to define an explicit density function, and then straightforwardly follow maximum likelihood
framework to estimate the parameters. Another idea is to estimate the data distribution using an
implicit density function, without the need for analytical forms of pmodel (e.g., see [11] for further
discussions). One of the most notably pioneered class of the latter is the generative adversarial
network (GAN) [10], an expressive generative model that is capable of producing sharp and realistic
images for natural scenes. Different from most generative models that maximize data likelihood
or its lower bound, GAN takes a radical approach that simulates a game between two players: a
generator G that generates data by mapping samples from a noise space to the input space; and a
discriminator D that acts as a classifier to distinguish real samples of a dataset from fake samples
produced by the generator G. Both G and D are parameterized via neural networks, thus this method
can be categorized into the family of deep generative models or generative neural models [9].
The optimization of GAN formulates a minimax problem, wherein given an optimal D, the learning
objective turns into finding G that minimizes the Jensen-Shannon divergence (JSD): DJS (pdata kpmodel ).
The behavior of JSD minimization has been empirically proven to be more similar to reverse KL
than to KL divergence [29, 15]. This, however, leads to the aforementioned issue of mode collapse,
which is indeed a notorious failure of GAN [11] where the generator only produces similarly looking
images, yielding a low entropy distribution with poor variety of samples.
Recent attempts have been made to solve the mode collapsing problem by improving the training
of GAN. One idea is to use the minibatch discrimination trick [27] to allow the discriminator to
detect samples that are unusually similar to other generated samples. Although this heuristics helps
to generate visually appealing samples very quickly, it is computationally expensive, thus normally
used in the last hidden layer of discriminator. Another approach is to unroll the optimization of
discriminator by several steps to create a surrogate objective for the update of generator during
training [20]. The third approach is to train many generators that discover different modes of the
data [14]. Alternatively, around the same time, there are various attempts to employ autoencoders as
regularizers or auxiliary losses to penalize missing modes [5, 31, 4, 30]. These models can avoid the
mode collapsing problem to a certain extent, but at the cost of computational complexity with the
exception of DFM in [31], rendering them unscalable up to ImageNet, a large-scale and challenging
visual dataset.
Addressing these challenges, we propose a novel approach to both effectively avoid mode collapse
and efficiently scale up to very large datasets (e.g., ImageNet). Our approach combines the KL
and reverse KL divergences into a unified objective function, thus it exploits the complementary
statistical properties from these divergences to effectively diversify the estimated density in capturing
multi-modes. We materialize our idea using GAN?s framework, resulting in a novel generative
adversarial architecture containing three players: a discriminator D1 that rewards high scores for
data sampled from pdata rather than generated from the generator distribution pG whilst another
discriminator D2 , conversely, favoring data from pG rather pdata , and a generator G that generates
data to fool both two discriminators. We term our proposed model dual discriminator generative
adversarial network (D2GAN).
It turns out that training D2GAN shares the same minimax problem as in GAN, which can be solved
by alternatively updating the generator and discriminators. We provide theoretical analysis showing
that, given G, D1 and D2 with enough capacity, i.e., in the nonparametric limit, at the optimal points,
the training criterion indeed results in the minimal distance between data and model distribution with
respect to both their KL and reverse KL divergences. This helps the model place fair distribution of
probability mass across the modes of the data generating distribution, thus allowing one to recover
the data distribution and generate diverse samples using the generator in a single shot. In addition, we
further introduce hyperparameters to stabilize the learning and control the effect of each divergence.
We conduct extensive experiments on one synthetic dataset and four real-world large-scale datasets
(MNIST, CIFAR10, STL-10, ImageNet) of very different nature. Since evaluating generative models
is notoriously hard [29], we have made our best effort to adopt a number of evaluation metrics from
literature to quantitatively compare our proposed model with the latest state-of-the-art baselines
whenever possible. The experimental results reveal that our method is capable of improving the
diversity while keeping good quality of generated samples. More importantly, our proposed model
can be scaled up to train on the large-scale ImageNet database, obtain a competitive variety score and
generate reasonably good quality images.
2
In short, our main contributions are: (i) a novel generative adversarial model that encourages the
diversity of samples produced by the generator; (ii) a theoretical analysis to prove that our objective is
optimized towards minimizing both KL and reverse KL divergence and has a global optimum where
pG = pdata ; and (iii) a comprehensive evaluation on the effectiveness of our proposed method using a
wide range of quantitative criteria on large-scale datasets.
2
Generative Adversarial Nets
We first review the generative adversarial network (GAN) that was introduced in [10] to formulate a
game of two players: a discriminator D and a generator G. The discriminator, D (x), takes a point x
in data space and computes the probability that x is sampled from data distribution Pdata , rather than
generated by the generator G. At the same time, the generator first maps a noise vector z drawn from a
prior P (z) to the data space, obtaining a sample G (z) that resembles the training data, and then uses
this sample to challenge the discriminator. The mapping G (z) induces a generator distribution PG in
data domain with probability density function pG (x). Both G and D are parameterized by neural
networks (see Fig. 1a for an illustration) and learned by solving the following minimax optimization:
min max J (G, D) = Ex?Pdata (x) [log (D (x))] + Ez?Pz [log (1 ? D (G (z)))]
G
D
The learning follows an iterative procedure wherein the discriminator and generator are alternatively
updated. Given a fixed G, the maximization subject to D results in the optimal discriminator
pdata (x)
D? (x) = pdata (x)+p
, whilst given this optimal D? , the minimization of G turns into minimizing
G (x)
the Jensen-Shannon (JS) divergence between the data and model distributions: DJS (Pdata kPG ) [10].
At the Nash equilibrium of a game, the model distribution recovers the data distribution exactly:
PG = Pdata , thus the discriminator D now fails to differentiate real or fake data as D (x) = 0.5, ?x.
(a) GAN.
(b) D2GAN.
Figure 1: An illustration of the standard GAN and our proposed D2GAN.
Since the JS divergence has been empirically proven to have the same nature as that of the reverse
KL divergence [29, 15, 11], GAN suffers from the model collapsing problem, and thus its generated
data samples have low level of diversity [20, 5].
3
Dual Discriminator Generative Adversarial Nets
To tackle GAN?s problem of mode collapse, in what follows we present our main contribution of a
framework that seeks an approximated distribution to effectively cover many modes of the multimodal
data. Our intuition is based on GAN, but we formulate a three-player game that consists of two
different discriminators D1 and D2 , and one generator G. Given a sample x in data space, D1 (x)
rewards a high score if x is drawn from the data distribution Pdata , and gives a low score if generated
from the model distribution PG . In contrast, D2 (x) returns a high score for x generated from PG
whilst giving a low score for a sample drawn from Pdata . Unlike GAN, the scores returned by our
discriminators are values in R+ rather than probabilities in [0, 1]. Our generator G performs a similar
role to that of GAN, i.e., producing data mapped from a noise space to synthesize the real data and
then fool both two discriminators D1 and D2 . All three players are parameterized by neural networks
wherein D1 and D2 do not share their parameters. We term our proposed model dual discriminator
generative adversarial network (D2GAN). Fig. 1b shows an illustration of D2GAN.
3
More formally, D1 , D2 and G now play the following three-player minimax optimization game:
min max J (G, D1 , D2 ) = ? ? Ex?Pdata [log D1 (x)] + Ez?Pz [?D1 (G (z))]
G D1 ,D2
+ Ex?Pdata [?D2 (x)] + ? ? Ez?Pz [log D2 (G (z))]
(1)
wherein we have introduced hyperparameters 0 < ?, ? ? 1 to serve two purposes. The first is
to stabilize the learning of our model. As the output values of two discriminators are positive
and unbounded, D1 (G (z)) and D2 (x) in Eq. (1) can become very large and have exponentially
stronger impact on the optimization than log D1 (x) and log D2 (G (z)) do, rendering the learning
unstable. To overcome this issue, we can decrease ? and ?, in effect making the optimization penalize
D1 (G (z)) and D2 (x), thus helping to stabilize the learning. The second purpose of introducing ?
and ? is to control the effect of KL and reverse KL divergences on the optimization problem. This
will be discussed in the following part once we have the derivation of our optimal solution.
Similar to GAN [10], our proposed network can be trained by alternatively updating D1 , D2 and G.
We refer to the supplementary material for the pseudo-code of learning parameters for D2GAN.
3.1
Theoretical analysis
We now provide formal theoretical analysis of our proposed model, that essentially shows that, given
G, D1 and D2 are of enough capacity, i.e., in the nonparametric limit, at the optimal points, G can
recover the data distributions by minimizing both KL and reverse KL divergences between model and
data distributions. We first consider the optimization problem with respect to (w.r.t) discriminators
given a fixed generator.
Proposition 1. Given a fixed G, maximizing J (G, D1 , D2 ) yields to the following closed-form
optimal discriminators D1? , D2? :
?pdata (x)
?pG (x)
D1? (x) =
and D2? (x) =
pG (x)
pdata (x)
Proof. According to the induced measure theorem [12], two expectations are equal:
Ez?Pz [f (G (z))] = Ex?PG [f (x)] where f (x) = ?D1 (x) or f (x) = log D2 (x). The objective function can be rewritten as below:
J (G, D1 , D2 ) = ? ? Ex?Pdata [log D1 (x)] + Ex?PG [?D1 (x)]
+ Ex?Pdata [?D2 (x)] + ? ? Ex?PG [log D2 (x)]
?
=
[?pdata (x) log D1 (x) ? pG D1 (x) ? pdata (x) D2 (x) + ?pG log D2 (x)] dx
x
Considering the function inside the integral, given x, we maximize this function w.r.t two variables
D1 , D2 to find D1? (x) and D2? (x). Setting the derivatives w.r.t D1 and D2 to 0, we gain:
?pG (x)
?pdata (x)
? pG (x) = 0 and
? pdata (x) = 0
(2)
D1
D2
The second derivatives: ??pdata (x)/D12 and ??pG (x)/D22 are non-positive, thus verifying that we have
obtained the maximum solution and concluding the proof.
Next, we fix D1 = D1? , D2 = D2? and find the optimal solution G? for the generator G.
Theorem 2. Given D1? , D2? , at the Nash equilibrium point (G? , D1? , D2? ) for minimax optimization
problem of D2GAN, we have the following form for each component:
J (G? , D1? , D2? ) = ? (log ? ? 1) + ? (log ? ? 1)
D1? (x) = ? and D2? (x) = ?, ?x at pG? = pdata
Proof. Substituting D1? , D2? from Eq. (2) into the objective function in Eq. (1) of the minimax
problem, we gain:
?
pdata (x)
pdata (x)
J (G, D1? , D2? ) = ? ? Ex?Pdata log ? + log
? ? pG (x)
dx
pG (x)
pG (x)
x
?
pG (x)
pG (x)
? ? pdata
dx + ? ? Ex?PG log ? + log
pdata (x)
pdata (x)
x
= ? (log ? ? 1) + ? (log ? ? 1) + ?DKL (Pdata kPG ) + ?DKL (PG kPdata ) (3)
4
where DKL (Pdata kPG ) and DKL (PG kPdata ) is the KL and reverse KL divergences between data and
model (generator) distributions, respectively. These divergences are always nonnegative and only zero
when two distributions are equal: pG? = pdata . In other words, the generator induces a distribution
pG? that is identical to the data distribution pdata , and two discriminators now fail to recognize the real
or fake samples since they return the same score of 1 for both samples. This concludes the proof.
The loss of generator in Eq. (3) becomes an upper bound when the discriminators are not optimal.
This loss shows that increasing ? promotes the optimization towards minimizing the KL divergence
DKL (Pdata kPG ), thus helping the generative distribution cover multiple modes, but may include
potentially undesirable samples; whereas increasing ? encourages the minimization of the reverse
KL divergence DKL (PG kPdata ), hence enabling the generator capture a single mode better, but may
miss many modes. By empirically adjusting these two hyperparameters, we can balance the effect of
two divergences, and hence effectively avoid the mode collapsing issue.
3.2
Connection to f-GAN
Next we point out the relations between our proposed D2GAN and f-GAN ? the model extends the
Jensen-Shannon divergence (JSD) of GAN to more general divergences, specifically f -divergences
[23]. A divergence in the f -divergence family has the following form:
?
q (x)
Df (P kQ) =
q (x) f
dx
p (x)
X
where f : R+ ? R is a convex, lower-semicontinuous function satisfying f (1) = 0. This
function has a convex conjugate function f ? , also known as Fenchel conjugate [13] : f ? (t) =
supu?domf {ut ? f (u)}. The function f ? is again convex and lower-semicontinuous.
Considering P the true distribution and Q the generator distribution, we resemble the learning
problem in GAN by minimizing the f -divergence between P and Q. Based on the variational lower
bound of f -divergence proposed by Nguyen et al. [22], the objective function of f-GAN can be
derived as follows:
min max F (?, ?) = Ex?P [gf (V? (x))] + Ex?Q? [?f ? (gf (V? (x)))]
?
?
where Q is parameterized by ? (as the generator in GAN), V? : X ? R is a function parameterized
by ? (as the discriminator in GAN) and gf : R ? domf ? is an output activation function (i.e., the
discriminator?s decision function) specific to the f -divergence used. Using appropriate functions gf
and f ? (see Tab. 2 in [23]), we recover the minimization of corresponding divergences such as JSD
in GAN, KL (associated with discriminator D1 ) and reverse KL (associated with discriminator D2 )
of our D2GAN.
The f-GAN, however, only considers a single divergence. On the other hand, our proposed method
combines KL and reserve KL divergences. Our idea is conceived upon pondering the advantages and
disadvantages of these two divergences in covering multiple modes of data. Combining them into a
unified objective function as in Eq. (3) helps us reversely engineer to finally obtain the optimization
game in Eq. (1) that can be efficiently formulated and solved using the principle of GAN.
4
Experiments
In this section, we conduct comprehensive experiments to demonstrate the capability of improving
mode coverage and the scalability of our proposed model on large-scale datasets. We use a synthetic
2D dataset for both visual and numerical verification, and four datasets of increasing diversity and
size for numerical verification. We have made our best effort to compare the results of our method
with those of the latest state-of-the-art GAN?s variants by replicating experimental settings in the
original work whenever possible.
For each experiment, we refer to the supplementary material for model architectures and additional
results. Common points are: i) discriminators? outputs with softplus activations :f (x) = ln (1 + ex ),
i.e., positive version of ReLU; (ii) Adam optimizer [16] with learning rate 0.0002 and the first-order
momentum 0.5; (iii) minibatch size of 64 samples for training both generator and discriminators; (iv)
Leaky ReLU with the slope of 0.2; and (v) weights initialized from an isotropic Gaussian: N (0, 0.01)
5
Symmetric KL-div
30.0
GAN
Unrolled GAN
D2GAN
25.0
20.0
15.0
10.0
5.0
0.0
0
5000
15000
10000
Step
20000
25000
(a) Symmetric KL divergence.
Wasserstein estimate
3.0
2.5
2.0
GAN
Unrolled GAN
D2GAN
1.5
1.0
0.5
0.0
0
5000
10000
Step
15000
20000
(b) Wasserstein distance.
25000
(c) Evolution of data (in blue) generated from GAN (top row), UnrolledGAN (middle row) and our D2GAN (bottom row) on 2D data
of 8 Gaussians. Data sampled from the true mixture are red.
Figure 2: The comparison of standard GAN, UnrolledGAN and our D2GAN on 2D synthetic dataset.
and zero biases. Our implementation is in TensorFlow [1] and we have published a version for
reference1 . We now present our experiments on synthetic data followed by those on large-scale
real-world datasets.
4.1
Synthetic data
In the first experiment, we reuse the experimental design proposed in [20] to investigate how well our
D2GAN can deal with multiple modes in the data. More specifically, we sample training data from
a 2D mixture of 8 Gaussian distributions with a covariance matrix 0.02I and means arranged in a
circle of zero centroid and radius 2.0. Data in these low variance mixture components are separated
by an area of very low density. The aim is to examine properties such as low probability regions and
low separation of modes.
We use a simple architecture of a generator with two fully connected hidden layers and discriminators
with one hidden layer of ReLU activations. This setting is identical, thus ensures a fair comparison
with UnrolledGAN2 [20]. Fig. 2c shows the evolution of 512 samples generated by our models and
baselines through time. It can be seen that the regular GAN generates data collapsing into a single
mode hovering around the valid modes of data distribution, thus reflecting the mode collapse in
GAN. At the same time, UnrolledGAN and D2GAN distribute data around all 8 mixture components,
and hence demonstrating the abilities to successfully learn multimodal data in this case. At the last
steps, our D2GAN captures data modes more precisely than UnrolledGAN as, in each mode, the
UnrolledGAN generates data that concentrate only on several points around the mode?s centroid, thus
seems to produce fewer samples than D2GAN whose samples fairly spread out the entire mode.
Next we further quantitatively compare the quality of generated data. Since we know the true
distribution pdata in this case, we employ two measures, namely symmetric KL divergence and
Wasserstein distance. These measures compute the distance between the normalized histograms
of 10,000 points generated from our D2GAN, UnrolledGAN and GAN to true pdata . Figs. 2a and
2b again clearly demonstrate the superiority of our approach over GAN and UnrolledGAN w.r.t
both distances (lower is better); notably with Wasserstein metric, the distance from ours to the true
distribution almost reduces to zero. These figures also demonstrate the stability of our D2GAN
(red curves) during training as it is much less fluctuating compared with GAN (green curves) and
UnrolledGAN (blue curves).
4.2
Real-world datasets
We now examine the performance of our proposed method on real-world datasets with increasing
diversities and sizes. For networks containing convolutional layers, we closely follow the DCGAN?s
design [24]. We use strided convolutions for discriminators and fractional-strided convolutions
for generator instead of pooling layers. Batch normalization is applied for each layer, except the
1
2
https://github.com/tund/D2GAN
We obtain the code of UnrolledGAN for 2D data from the link authors provided in [20].
6
generator output layer and the discriminator input layer. We also use Leaky ReLU activations for
the discriminators, and ReLU for the generator, except for its output layer, which uses tanh since we
rescale the pixel intensities into the range [-1, 1] before feeding images to our model. The only
difference is that, for our model, initializing the weights from N(0, 0.01) yields slightly better results
than from N(0, 0.02). We again refer to the supplementary material for detailed architectures.
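A minimal tf.keras sketch consistent with the conventions just described (strided convolutions in the
discriminator, fractionally-strided convolutions in the generator, batch normalization except on the
generator output and discriminator input, Leaky ReLU vs. ReLU, and a tanh output). The depths,
filter counts and 32×32 input size are illustrative choices of ours, not the paper's exact architecture:

import tensorflow as tf
from tensorflow.keras import layers

def make_discriminator():
    return tf.keras.Sequential([
        layers.Conv2D(64, 4, strides=2, padding='same',
                      input_shape=(32, 32, 3)),              # no BN on the input layer
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding='same'),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),                                     # one score per image
    ])

def make_generator(z_dim=100):
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_shape=(z_dim,)),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding='same'),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(3, 4, strides=2, padding='same',
                               activation='tanh'),           # outputs in [-1, 1]
    ])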
4.2.1 Evaluation protocol
Evaluating the quality of images produced by generative models is notoriously challenging due to
the variety of probability criteria and the lack of a perceptually meaningful image similarity metric
[29]. Even when a model can generate plausible images, it is not useful if those images are all
visually similar to one another. Therefore, in order to quantify both the coverage of data modes and
the quality of the samples produced, we use several different ad-hoc metrics in different experiments
to compare with other baselines.
First we adopt the Inception score proposed in [27], which is computed as
exp(E_x[D_KL(p(y|x) ‖ p(y))]),
where p(y|x) is the conditional label distribution for image x estimated using a pretrained Inception
model [28], and p(y) is the marginal distribution, approximated as
p(y) ≈ (1/N) Σ_{n=1}^{N} p(y | x_n = G(z_n)).
This metric rewards good and varied samples, but can sometimes be fooled by a model that collapses
and generates a single very low quality image, and thus fails to measure whether a model has been
trapped in one bad mode. To address this problem, for labeled datasets, we further recruit the
so-called MODE score introduced in [5]:
exp(E_x[D_KL(p(y|x) ‖ p*(y))] − D_KL(p(y) ‖ p*(y))),
where p*(y) is the empirical distribution of labels estimated from the training data. This score
adequately reflects the variety and visual quality of images, as discussed in [5].
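Given a matrix of predicted class probabilities p(y|x) for a batch of generated images, both scores
reduce to a few lines; a sketch (names ours), with p_star standing for the empirical label distribution
p*(y):

import numpy as np

def inception_score(p_yx, eps=1e-12):
    """p_yx: (N, K) array of predicted class probabilities p(y|x)."""
    p_y = p_yx.mean(axis=0)                                  # marginal p(y)
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return np.exp(kl.mean())

def mode_score(p_yx, p_star, eps=1e-12):
    """p_star: (K,) empirical label distribution of the training data."""
    p_y = p_yx.mean(axis=0)
    kl1 = (p_yx * (np.log(p_yx + eps) - np.log(p_star + eps))).sum(axis=1).mean()
    kl2 = (p_y * (np.log(p_y + eps) - np.log(p_star + eps))).sum()
    return np.exp(kl1 - kl2)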
4.2.2 Handwritten digit images
We start with the handwritten digit images of MNIST [19], which consists of 60,000 training and
10,000 testing 28×28 grayscale images of digits from 0 to 9. Following the setting in [5], we first
assume that MNIST has 10 modes, representing the connected components in the data manifold
associated with the 10 digit classes. We then perform an extensive grid search over hyperparameter
configurations, wherein our two regularization constants α, β in Eq. (1) are varied in {0.01, 0.05, 0.1,
0.2}. For a fair comparison, we use the same parameter ranges and fully connected layers for our
network (c.f. the supplementary material for more details), and adopt the results of GAN and mode
regularized GAN (Reg-GAN) from [5].
For evaluation, we first train a simple yet effective 3-layer convolutional net³ that obtains 0.65%
error on the MNIST test set, and then employ it to predict label probabilities and compute MODE
scores for the generated samples. Fig. 3 (left) shows the distributions of MODE scores obtained by
the three models. Clearly, our proposed D2GAN significantly outperforms the standard GAN and
Reg-GAN, achieving scores mostly in the maximum range [8.0-9.0]. It is worth noting that we did
not observe substantial differences in the average MODE scores obtained by varying the network
size during the parameter search. We here report the result of the minimal network with the smallest
number of layers and hidden units.
To study the effect of α and β, we inspect the results obtained by this minimal network with varied
α, β in Fig. 3 (right). The pattern is that, with one of the two constants fixed, our D2GAN obtains a
better MODE score as the other increases up to a certain value, after which the score can
significantly decrease.
MNIST-1K. The standard MNIST data with the 10-mode assumption seems fairly trivial. Hence,
based on this data, we test our proposed model on a more challenging variant. We follow the
technique used in [5, 20] to construct a new 1000-class MNIST dataset (MNIST-1K) by stacking
three randomly selected digits to form an RGB image with a different digit image in each channel.
The resulting data can be assumed to contain 1,000 distinct modes, corresponding to the
combinations of digits in the 3 channels, from 000 to 999.
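A sketch of this stacking construction (the function name and random seed are ours; shapes assume
the standard 28×28 MNIST format):

import numpy as np

def make_mnist_1k(images, labels, n, seed=0):
    """Stack three random MNIST digits into the RGB channels of one image.

    images: (M, 28, 28) grayscale digits; labels: (M,) digit classes 0-9.
    Returns (n, 28, 28, 3) images and (n,) mode ids in {0, ..., 999}.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(images), size=(n, 3))        # three digits per image
    stacked = np.stack([images[idx[:, ch]] for ch in range(3)], axis=-1)
    modes = labels[idx[:, 0]] * 100 + labels[idx[:, 1]] * 10 + labels[idx[:, 2]]
    return stacked, modes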
In this experiment, we use a more powerful model with convolutional layers for discriminators and
transposed convolutions for the generator. We measure the performance by the number of modes
3 The network architecture is similar to https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py.
Figure 3: Distributions of MODE scores (left) and average MODE scores (right) with varied α, β.
for which the model generated at least one sample out of 25,600 total samples, and by the reverse
KL divergence between the model distribution (i.e., the label distribution predicted by the pretrained
MNIST classifier used in the previous experiment) and the expected data distribution. Tab. 1 reports
the results of our D2GAN compared with those of GAN and UnrolledGAN taken from [20], and
DCGAN and Reg-GAN from [5]. Our proposed method again clearly demonstrates its superiority
over the baselines by covering all modes and achieving the best distance, which is close to zero.
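Both metrics follow directly from the classifier's predictions on generated samples; a sketch (names
ours), where preds holds predicted mode ids (e.g., the stacked digit labels above) and the expected
data distribution is uniform over the 1,000 modes:

import numpy as np

def coverage_and_reverse_kl(preds, n_modes=1000, eps=1e-10):
    """preds: (N,) array of predicted mode ids for N generated samples."""
    counts = np.bincount(preds, minlength=n_modes)
    covered = int((counts > 0).sum())                 # number of modes hit
    model_p = counts / counts.sum() + eps             # smoothed model distribution
    data_p = np.full(n_modes, 1.0 / n_modes)          # expected (uniform) distribution
    reverse_kl = float(np.sum(model_p * np.log(model_p / data_p)))
    return covered, reverse_kl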
Table 1: Numbers of modes covered and reverse KL divergence between model and data distributions.

Model                 GAN [20]        UnrolledGAN [20]   DCGAN [5]      Reg-GAN [5]    D2GAN
# modes covered       628.0 ± 140.9   817.4 ± 37.9       849.6 ± 62.7   955.5 ± 18.7   1000.0 ± 0.00
DKL(model ‖ data)     2.58 ± 0.75     1.43 ± 0.12        0.73 ± 0.09    0.64 ± 0.05    0.08 ± 0.01

4.2.3 Natural scene images
We now extend our experiments to investigate the scalability of our proposed method on much more
challenging large-scale image databases of natural scenes. We use three widely adopted datasets:
CIFAR-10 [17], STL-10 [6] and ImageNet [26]. CIFAR-10 is a well-studied dataset of 50,000 32×32
training images of 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
STL-10, a subset of ImageNet, contains about 100,000 unlabeled 96×96 images, which is more
diverse than CIFAR-10, but less so than the full ImageNet. We downscale all images by a factor of 3
and train our networks at 32×32 resolution. ImageNet is a very large database of about 1.2 million
natural images from 1,000 classes, normally used as the most challenging benchmark for validating
the scalability of deep models. We follow the preprocessing in [18], except for subsampling to 32×32
resolution. We use the code provided in [27] to compute the Inception score for 10 independent
partitions of 50,000 generated samples.
Table 2: Inception scores on CIFAR-10.

Model                 Score
Real data             11.24 ± 0.16
WGAN [2]              3.82 ± 0.06
MIX+WGAN [3]          4.04 ± 0.07
Improved-GAN [27]     4.36 ± 0.04
ALI [8]               5.34 ± 0.05
BEGAN [4]             5.62
MAGAN [30]            5.67
DCGAN [24]            6.40 ± 0.05
DFM [31]              7.72 ± 0.13
D2GAN                 7.15 ± 0.07
Figure 4: Inception scores on STL-10 and ImageNet.
Tab. 2 and Fig. 4 show the Inception scores on the CIFAR-10, STL-10 and ImageNet datasets
obtained by our model and by baselines collected from recent work in the literature. It is worth
noting that we only compare with methods trained in a completely unsupervised manner, without
label information. As a result, there exist 8 baselines on CIFAR-10, whilst only DCGAN [24] and
denoising feature matching (DFM) [31] are available on STL-10 and ImageNet. We use our own
TensorFlow implementation of DCGAN with the same network architecture as our model for a fair
comparison. In all 3 experiments, the D2GAN fails to beat DFM, but outperforms the other baselines
by large margins. The lower results compared with DFM suggest that using autoencoders to match
high-level features appears
to be an effective way to encourage diversity. This technique is compatible with our method, and
integrating it could be a promising avenue for our future work.
The two discriminators D1 and D2 have almost identical architectures, so they could potentially
share parameters in many different schemes. We explore this direction by creating two versions of
our D2GAN with the same hyperparameter setting. The first version shares all parameters of D1 and
D2 except the last (output) layer. This model failed because the discriminator then contains far fewer
parameters, rendering it unable to capture the two inverse ratios of the two density functions. The
second version shares all parameters of D1 and D2 except the last two layers. This version performed
better than the previous one, and could obtain promising Inception scores (7.01 on CIFAR-10, 7.44
on STL-10 and 7.81 on ImageNet), but these results are still worse than those of our proposed model
without parameter sharing.
Finally, we show several samples generated by our proposed model trained on these three datasets in
Fig. 5. The samples are fair random draws, not cherry-picked. It can be seen that our D2GAN is able
to produce visually recognizable images of cars, trucks, boats and horses on CIFAR-10. The objects
become harder to recognize on STL-10, but the shapes of airplanes, cars, trucks and animals can still
be identified, and images with various backgrounds such as sky, underwater, mountain and forest are
produced on ImageNet. This confirms the diversity of the samples generated by our model.
Figure 5: Samples generated by our proposed D2GAN trained on natural image datasets:
(a) CIFAR-10, (b) STL-10, (c) ImageNet. Due to space limits, please refer to the supplementary
material for a larger plot.
5 Conclusion
To summarize, we have introduced a novel approach for combining the Kullback-Leibler (KL) and
reverse KL divergences into a unified objective function for the density estimation problem. Our
idea is to exploit the complementary statistical properties of the two divergences to improve both the
quality and the diversity of samples generated from the estimator. To that end, we propose a novel
framework based on generative adversarial nets (GANs), which formulates a minimax game of three
players, two discriminators and one generator, thus termed the dual discriminator GAN (D2GAN).
With the two discriminators fixed, the learning of the generator moves towards optimizing both the
KL and reverse KL divergences simultaneously, and thus can help avoid mode collapse, a notorious
drawback of GANs. We have conducted extensive experiments to demonstrate the effectiveness and
scalability of our proposed approach using synthetic and large-scale real-world datasets. Compared
with the latest state-of-the-art baselines, our model is more scalable, can be trained on the large-scale
ImageNet dataset, and obtains Inception scores lower than those of the combination of a denoising
autoencoder and GAN (DFM), but significantly higher than the others. Finally, we note that our
method is orthogonal to, and could integrate, techniques from those baselines such as semi-supervised
learning [27], conditional architectures [21, 7, 25] and autoencoders [5, 31].
Acknowledgments. This work was partially supported by the Australian Research Council (ARC)
Discovery Grant Project DP160109394.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow,
Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser,
Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray,
Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul
Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden,
Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale
machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. 4
[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint
arXiv:1701.07875, 2017. 2
[3] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and
equilibrium in generative adversarial nets (gans). arXiv preprint arXiv:1703.00573, 2017. 2
[4] David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative
adversarial networks. arXiv preprint arXiv:1703.10717, 2017. 1, 2
[5] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized
generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016. 1, 2, 4.2.1, 4.2.2,
4.2.2, 1, 5
[6] Adam Coates, Andrew Y Ng, and Honglak Lee. An analysis of single-layer networks in
unsupervised feature learning. In International Conference on Artificial Intelligence and
Statistics (AISTATS), pages 215–223, 2011. 4.2.3
[7] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using
a laplacian pyramid of adversarial networks. In Advances in neural information processing
systems (NIPS), pages 1486–1494, 2015. 5
[8] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier
Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint
arXiv:1606.00704, 2016. 2
[9] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
http://www.deeplearningbook.org. 1
[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani,
M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in 27th Neural
Information Processing Systems (NIPS), pages 2672–2680. Curran Associates, Inc., 2014. 1, 2,
3
[11] Ian J. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. CoRR, 2017. 1, 2
[12] Somesh Das Gupta and Jun Shao. Mathematical statistics, 2000. 3.1
[13] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of convex analysis. Springer
Science & Business Media, 2012. 3.2
[14] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Phung. Multi-generator generative
adversarial nets. arXiv preprint arXiv:1708.02556, 2017. 1
[15] Ferenc Huszár. How (not) to train your generative model: Scheduled sampling, likelihood,
adversary? arXiv preprint arXiv:1511.05101, 2015. 1, 2
[16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014. 4
[17] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images.
Computer Science Department, University of Toronto, Tech. Rep, 1(4), 2009. 4.2.3
[18] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep
convolutional neural networks. In Proceedings of the 26th Annual Conference on Neural Information
Processing Systems (NIPS), volume 2, pages 1097–1105, Lake Tahoe, United States, December
3–6, 2012. 4.2.3
[19] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten
digits. 1998. 4.2.2
[20] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial
networks. arXiv preprint arXiv:1611.02163, 2016. 1, 2, 4.1, 2, 4.2.2, 1
[21] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint
arXiv:1411.1784, 2014. 5
[22] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information
Theory, 56(11):5847–5861, 2010. 3.2
[23] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural
samplers using variational divergence minimization. In D. D. Lee, M. Sugiyama, U. V. Luxburg,
I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages
271–279. Curran Associates, Inc., 2016. 1, 3.2
[24] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with
deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
4.2, 2, 4.2.3
[25] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak
Lee. Generative adversarial text to image synthesis. In Proceedings of The 33rd International
Conference on Machine Learning (ICML), volume 3, 2016. 5
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.
ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision
(IJCV), 115(3):211–252, 2015. 4.2.3
[27] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. In Advances in Neural Information Processing Systems
(NIPS), pages 2226–2234, 2016. 1, 4.2.1, 4.2.3, 2, 5
[28] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, 2016. 4.2.1
[29] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative
models. arXiv preprint arXiv:1511.01844, 2015. 1, 2, 4.2.1
[30] Ruohan Wang, Antoine Cully, Hyung Jin Chang, and Yiannis Demiris. Magan: Margin
adaptation for generative adversarial networks. arXiv preprint arXiv:1704.03817, 2017. 1, 2
[31] D Warde-Farley and Y Bengio. Improving generative adversarial networks with denoising
feature matching. ICLR submissions, 8, 2017. 1, 2, 4.2.3, 5
6,480 | 6,861 | Dynamic Revenue Sharing*
Santiago Balseiro
Columbia University
New York City, NY
[email protected]
Max Lin
Google
New York City, NY
[email protected]
Vahab Mirrokni
Google
New York City, NY
[email protected]
Song Zuo†
Tsinghua University
Beijing, China
[email protected]
Renato Paes Leme
Google
New York City, NY
[email protected]
Abstract
Many online platforms act as intermediaries between a seller and a set of buyers.
Examples of such settings include online retailers (such as Ebay) selling items
on behalf of sellers to buyers, or advertising exchanges (such as AdX) selling
pageviews on behalf of publishers to advertisers. In such settings, revenue sharing
is a central part of running such a marketplace for the intermediary, and fixed-percentage revenue sharing schemes are often used to split the revenue among the
platform and the sellers. In particular, such revenue sharing schemes require the
platform to (i) take at most a constant fraction α of the revenue from auctions and
(ii) pay the seller at least the seller-declared opportunity cost c for each item sold.
A straightforward way to satisfy the constraints is to set a reserve price at c/(1 − α)
for each item, but it is not the optimal solution for maximizing the profit of the
intermediary.
While previous studies (by Mirrokni and Gomes, and by Niazadeh et al) focused on
revenue-sharing schemes in static double auctions, in this paper, we take advantage
of the repeated nature of the auctions. In particular, we introduce dynamic revenue
sharing schemes where we balance the two constraints over different auctions
to achieve higher profit and seller revenue. This is directly motivated by the
practice of advertising exchanges where the fixed-percentage revenue-share should
be met across all auctions and not in each auction. In this paper, we characterize
the optimal revenue sharing scheme that satisfies both constraints in expectation.
Finally, we empirically evaluate our revenue sharing scheme on real data.
1 Introduction
The space of internet advertising can be divided into two large areas: search ads and display ads. While
similar at first glance, they are different both in terms of business constraints in the market as well as
algorithmic challenges. A notable difference is that in search ads the auctioneer and the seller are the
same party, as the same platform owns the search page and operates the auction. Thus search ads are
a one-sided market: the only agents outside the control of the auctioneer are buyers. In display ads,
on the other hand, the platform operates the auction but, in most cases, it does not own the pages in
* We thank Jim Giles, Nitish Korula, Martin Pál, Rita Ren and Balu Sivan for the fruitful discussion and their
comments on early versions of this paper. We also thank the anonymous reviewers for their helpful comments.
A full version of this paper can be found at https://ssrn.com/abstract=2956715.
† The work was done while this author was an intern at Google. This author was supported by the National
Basic Research Program of China Grant 2011CBA00300, 2011CBA00301, the Natural Science Foundation of
China Grant 61033001, 61361136003, 61303077, 61561146398, a Tsinghua Initiative Scientific Research Grant
and a China Youth 1000-talent program.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
which the ads are displayed, making the main problem the design of a two-sided market, referred to
as ad exchanges.
The problem of designing an ad exchange can be decomposed in two parts: the first is to design
an auction, which will specify how an ad impression will be allocated among different prospective
buyers (advertisers) and how they will be charged from it. The second component is a revenue sharing
scheme, which specifies how the revenue collected from buyers will be split between the seller (the
publisher) and the platform. Traditionally the problems of designing an auction and designing a
revenue sharing scheme have been merged in a single one called double auction design. This was the
traditional approach taken by [4], [3] and more recently in the algorithmic work of [2, 5]. The goals
in those approaches have been to maximize efficiency in the market, maximize profit of the platform
and to characterize when the profit maximizing policy is a simple one.
Those objectives, however, do not entirely correspond to the actual problem faced by advertising exchanges. Take platform-profit-maximization, for example. The ad-exchange business is a highly
competitive environment. A web publisher (seller) can send their ad impressions to a dozen of
different exchanges. If an exchange tries to extract all the surplus in the form of profit, web publishers
will surely migrate to a less greedy platform. In order to retain their inventory, exchanges must align
their incentives with the incentives of those of web publishers.
A good practical solution, which has been adopted by multiple real world platforms, is to declare a
fixed revenue sharing scheme. The exchange promises it will keep at most an α-fraction of profits,
where the constant α is typically the outcome of a business negotiation between the exchange and the
web publisher. After the fraction is agreed, the objectives of the seller and the exchange are aligned.
The exchange maximizes profits by maximizing the seller?s revenue.
If revenue sharing was the only constraint, the exchange could simply ignore sellers and run an
optimal auction among buyers. In practice, however, web-publishers have outside options, typically
in the form of reservation contracts, which should be taken into account by the exchange. Reservation
contracts are a very traditional form of selling display ads that predates ad exchanges, where buyers
and sellers make agreements offline specifying a volume of impressions to be transacted, a price per
impression and a penalty for not satisfying the contract. Those agreements are entered in a system (for
example Google?s Doubleclick for Publishers) that manages reservations on behalf of the publisher.
This reservation system determines for each arriving impression the best matching offline contract
that impression could be allocated to as well as the cost of not allocating that impression. The cost of
not allocating an impression takes into account the potential revenue from allocating to a contract and
the probability of paying a penalty for not satisfying the contract.
From our perspective, it is irrelevant how a cost is computed by reservation systems. It is sufficient
to assume that for each impression, the publisher has an opportunity cost and it is only willing to
sell that particular impression in the exchange if its payout for that impression exceeds the cost.
Exchanges therefore, allow the publisher to submit a cost and only sell that impression if they are
able to pay the publisher at least the cost per that impression.
We design the following simple auction and revenue sharing scheme that we call the na?ve policy:
• the seller sends the exchange an ad impression with cost c.
• the exchange runs a second price auction with reserve r ≥ c/(1 − α).
• if the item is sold, the exchange keeps an α fraction of the revenue and sends the remaining
1 − α fraction to the seller.
This scheme is pretty simple and intuitive for each participant in the market. It guarantees that if the
impression is sold, the revenue will be at least c/(1 − α) and therefore the seller's payout will be at
least c. So both the minimum payout and revenue sharing constraints are satisfied with probability
1. This scheme also has the advantage of decoupling the auction and the revenue sharing problem.
The platform is free to use any auction among the buyers as long as it guarantees that whenever the
impression is matched, the revenue extracted from buyers is at least c/(1 − α).
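For concreteness, a minimal sketch of the naïve policy for a single auction (function and variable
names are ours); it simply inflates the cost to set the reserve and splits the buyer payment at the fixed
rate:

def naive_auction(bids, c, alpha):
    """One second-price auction under the naive revenue sharing scheme.

    Returns (sold, buyer_payment, seller_payout, platform_profit).
    """
    reserve = c / (1.0 - alpha)
    top = sorted(bids, reverse=True)
    b1 = top[0]
    b2 = top[1] if len(top) > 1 else 0.0      # single bidder: treat 2nd bid as zero
    if b1 < reserve:
        return False, 0.0, 0.0, 0.0           # impression not matched
    x = max(reserve, b2)                      # buyer pays max(reserve, 2nd bid)
    return True, x, (1.0 - alpha) * x, alpha * x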
Despite being simple, practical and allowing the exchange to experiment with the auction without
worrying about revenue sharing, this mechanism is sub-optimal both in terms of platform profit and
publisher payout. The exchange might be willing to accept a revenue share lower than α if this grants
more freedom in optimizing the auction and extracting more revenue.
More generally, the exchange might exploit the repeated nature of the auction to improve revenue
even further by adjusting the revenue share dynamically based on the bids and the cost. In this setting,
we can think of the revenue share constraints as being enforced on average, i.e., over a sequence of
auctions the platform is required to bound by α the ratio of the aggregate profit to the aggregate
revenue collected from buyers. This allows the platform to increase the revenue share on certain
queries and reduce it on others.
In the repeated auctions setting, the exchange is also allowed to treat the minimum cost constraint on
aggregate: the payout for the seller needs to be at least as large as the sum of costs of the impressions
matched. The exchange can implement this in practice by always paying the seller at least his cost
even if the revenue collected from buyers is less than the cost. This would cause the exchange to
operate at a loss for some impressions. But this can be advantageous for the exchange on aggregate if
it is able to offset these losses by leveraging other queries with larger profit margins.
In this paper, we attempt to characterize the optimal scheme for repeated auctions and measure on
data the improvement with respect to the simple revenue sharing scheme discussed above.
Finally, while we discuss the main application of our results in the context of advertising exchanges,
our model and results apply to the broad space of platforms that serve as intermediaries between
buyers and sellers, and help run many repeated auctions over time. The issue of dynamic revenue
sharing also arises when Amazon or eBay acts as a platform and splits revenues from a sale with
the sellers, or when ride-sharing services such as Uber or Lyft split the fare paid by the passenger
between the driver and the platform. Uber, for example, mentions on their website³ that: "Drivers
using the partner app are charged an Uber Fee as a percentage of each trip fare. The Uber Fee varies
by city and vehicle type and helps Uber cover costs such as technology, marketing and development
of new features within the app."
1.1 Our Results and Techniques
We propose different designs of auctions and revenue sharing policies in exchanges and analyze
them both theoretically and empirically on data from a major ad exchange. We compare against the
naïve policy described above. We compare policies in terms of seller payout, exchange profit and
match-rate (number of impressions sold). We note that match-rate is an important metric in practice,
since it represents the volume of inventory transacted in the exchange and it is a proxy for the volume
of the ad market this particular exchange is able to capture.
For the auction, we restrict our attention to second price auctions with reserve prices, since we aim at
using theory as a guide to inform decisions about practical designs that can be implemented in real
ad-exchanges. To be implementable in practice the designs need to follow the industry practice of
running second-price auctions with reserves. This design will be automatically incentive compatible
for buyers. On the seller side, instead of enforcing incentive compatibility, we will assume that
impression costs are reported truthfully. Note that under the revenue sharing contract, at least when
the constraint binds (which always happens in practice), the goals of the seller and the platform are
partially aligned: maximizing profit is the same as maximizing revenue. Thus, sellers
have little incentive to misreport their costs. In fact, this is one of the main reasons that so many
real-world platforms such as Uber adopt fixed revenue sharing contracts. In the ads market, moreover,
sellers are also typically viewed as less strategic and reactive agents. Thus, we believe that the latter
assumption is not too restrictive in practice.⁴
We will also assume Bayesian priors on buyers' valuations and on sellers' costs. For the sake of
simplicity, we will start with the assumption that seller costs are constant and show in the full version
how to extend our results to the case where costs are sampled from a distribution.
We will focus on the exchange profit as our main objective function. While this paper will take the
perspective of the exchange, the policies proposed will also improve the seller's payout with respect
to the naïve policy. The reason is simple: the naïve policy keeps exactly an α fraction of the revenue
extracted from buyers as profit. Any policy that keeps at most α and improves profit must improve
revenue extracted from buyers at least at the same rate and hence improve the seller's payout.
Single Period Revenue Sharing. We first study the case where the exchange is required to satisfy
the revenue sharing constraint in each period, i.e., for each impression at most an α-fraction of the
3 See https://www.uber.com/info/how-much-do-drivers-with-uber-make/
4 While in this paper we focus on the dynamic optimization of revenue sharing schemes when agents report
truthfully, it is still an interesting avenue of research to study the broader market design question of designing
dynamic revenue sharing schemes while taking into account agents' incentives.
revenue can be retained as profit. We characterize the optimal policy. We first show that the optimal
policy always sets the reserve price above the seller's cost, but not necessarily above c/(1 − α). The
exchange might voluntarily want to decrease its revenue share if this grants freedom to set lower
reserve prices and extract more revenue from buyers.
When the opportunity cost of the seller is low, the optimal policy for the exchange ignores the seller's
cost and prices according to the optimal reserve price. When the opportunity cost is high, pricing
according to c/(1 − α) is again not optimal because demand is inelastic at that price. The exchange
internalizes the opportunity cost, prices between c and c/(1 − α), and reduces its revenue share if
necessary. For intermediate values of the opportunity cost, the exchange is better off employing the
naïve policy and pricing according to c/(1 − α).
Multi Period Revenue Sharing. We then study the case where the revenue share constraint is
imposed over the aggregate buyers' payments. We provide intuition on the structure of the optimal
policy by first solving a Lagrangian relaxation and then constructing an asymptotically optimal heuristic policy (satisfying the original constraints) based on the optimal relaxation solution. In particular,
we introduce a Lagrange multiplier for the revenue sharing constraint to get the optimal solution
to the Lagrangian relaxation. The optimal revenue sharing policy obtained from the Lagrangian
relaxation pays the publisher a convex combination of his cost c and a fraction (1 − α) of the
revenue obtained from buyers. Depending on the value of the multiplier, the reserve price could be
below c, exposing the platform to the possibility of operating at a loss in some auctions.
The policy obtained from the Lagrangian relaxation, while intuitive, only satisfies the revenue sharing
and cost constraints in expectation. Because this is not feasible for the platform, we discuss heuristic
policies that approximate that policy in the limit, but satisfy the constraints surely in aggregate over
the T periods. Then we discuss an even stronger policy that satisfies the aggregate constraints for any
prefix, i.e., at any given time t, the constraints are satisfied in aggregate from time 1 to t.
Comparative Statics. We compare the structure of the single period and multi period policies. The
first insight is that the optimal multi-period policy uses lower reserve prices therefore matching more
queries. The key insight we obtain from the comparison is that multi-period revenue sharing policies
are particularly effective when markets are thick, i.e., when the second-highest bid is often above a
rescaled version of the cost and costs are not too high.
Empirical Insights. To complement our theoretical results, we conduct an empirical study simulating our revenue sharing policies on real world data from a major ad exchange. The data comes
from bids in a second price auction with reserves (for a single-slot), which is truthful. Our study
confirms the effectiveness of the multi period revenue sharing policies and single period revenue
sharing policies over the naïve policy. The results are consistent for different values of α: the profit
lifts of single period revenue sharing policies are +1.23% to +1.64%, and the lifts of multi period
revenue sharing policies are roughly 5.5 to 7 times larger (+8.53% to +9.55%).
We do an extended overview in Section 7, but leave the further details to the full version. We omit the
related work here, which can be found in the full version.
2 Preliminaries
Setting. We study a discrete-time finite horizon setting in which items arrive sequentially to an
intermediary. We index the sequence of items by t = 1, . . . , T . There are multiple buyers bidding in
the intermediary (the exchange) and the intermediary determines the winning bidder via a second
price auction. We assume that the bids from the buyers are drawn independently and identically
distributed across auctions, but potentially correlated across buyers for a given auction.
We will assume that the profit function of the joint distribution of bids is quasi-concave. The expected
profit function corresponds to the expected revenue of a second price auction with reserve price r and
opportunity cost c:
π(r, c) = E[1{b^f ≥ r} (max(r, b^s) − c)],
where b_t^f and b_t^s are the highest and second-highest bids at time t. Our assumption on the bid
distribution will be as follows:
Assumption 2.1. The expected profit function π(r, c) is quasi-concave in r for each c.
The previous assumption is satisfied, for example, if bids are independent and identically distributed
according to a distribution with increasing hazard rates (see, e.g., [1]).
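In practice, π(r, c) can be estimated by Monte Carlo from sampled bid pairs, and r*(c) found by a
line search; a sketch (names ours), where bf and bs are numpy arrays of highest and second-highest
bids:

import numpy as np

def expected_profit(r, c, bf, bs):
    """Monte Carlo estimate of pi(r, c) = E[1{bf >= r} (max(r, bs) - c)]."""
    sold = bf >= r
    return float(np.mean(sold * (np.maximum(r, bs) - c)))

def optimal_reserve(c, bf, bs, grid):
    """r*(c) by grid search; quasi-concavity makes a line search sufficient."""
    profits = [expected_profit(r, c, bf, bs) for r in grid]
    return grid[int(np.argmax(profits))]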
Mechanism. The seller submitting the items sets an opportunity cost of c ≥ 0 for the items. The
profit of the intermediary is the difference between the revenue collected from the buyers and the
payments made to the seller. The intermediary has agreed to a revenue sharing scheme that limits the
profit of the intermediary to at most α ∈ (0, 1) of the total revenue collected from the buyers.
The intermediary implements a non-anticipative adaptive policy π that maps the history at time t
to a reserve price r_t^π ∈ R_+ for the second price auction and a payment function p_t^π : R_+ → R_+
that determines the amount to be paid to the seller as a function of the buyers' payments. That is,
the item is sold whenever the highest bid is above the reserve price, or equivalently b_t^f ≥ r_t^π. The
intermediary's revenue is equal to the buyers' payment of max(r_t^π, b_t^s) and the seller's revenue
is given by p_t^π(max(r_t^π, b_t^s)). The intermediary's profit is given by the difference between the
buyers' payments and the payments to the seller, i.e., max(r_t^π, b_t^s) − p_t^π(max(r_t^π, b_t^s)).
From the perspective of the buyers, the mechanism implemented by the intermediary is a second price
auction with (potentially dynamic) reserve price r_t^π. The intermediary's problem amounts to
maximizing profits subject to the revenue sharing constraint. The revenue sharing constraint can be
imposed at every single period or over multiple periods. We discuss each model in turn.
Naïve revenue sharing scheme. The most straightforward revenue sharing scheme is the one that
sets a reserve above c/(1 − α) and pays the seller a (1 − α)-fraction of the revenue:
r_t^π ≥ c/(1 − α),   p_t^π(x) = (1 − α)x.   (1)
Since the revenue share is fixed, the intermediary's profit is given by α · max(r_t^π, b_t^s). Thus, the
intermediary optimizes profits by optimizing revenues, and the optimal reserve price is given by:
r* = argmax_{r ≥ c/(1−α)} π(r, 0).
The naïve revenue sharing scheme sets a reserve above c/(1 − α) and pays the seller (1 − α) of
the buyers' payments. This guarantees that the payment to the seller is always no less than c, by
construction, because the payment of the buyers is at least the reserve price. Since the intermediary's
profit is a fraction α of the buyers' payment, the seller's cost does not appear in the objective, which
reduces to απ(r, 0). Note, however, that the seller's cost does appear as a constraint in
the intermediary's optimization problem: the reserve price should be at least c/(1 − α).
This is the baseline that we will use to compare the proposed policies with in the experiment section.
This policy is suboptimal for various reasons. Consider, for example, the extreme case where the
buyers always bid more than c and less than c/(1 − α). In this case, the profit from the naïve revenue
sharing scheme is zero. However, the intermediary can still obtain a non-zero profit by setting the
reserve somewhere between c and c/(1 − α), which results in a revenue share less than α. If the
revenue sharing constraint is imposed over multiple periods instead of each single period, we are able
to dynamically balance out the deficit and surplus of the revenue sharing constraint over time.
3 Single Period Revenue Sharing Scheme
In this case the revenue sharing scheme imposes that in every single period the profit of the intermediary is at most α of the buyers' payment. We start by formulating the profit maximization problem
faced by the intermediary as a mathematical program with optimal value J^S.
J^S ≜ max_π Σ_{t=1}^T E[ 1{b_t^f ≥ r_t^π} (max(r_t^π, b_t^s) − p_t^π(max(r_t^π, b_t^s))) ]   (2a)
s.t.  p_t^π(x) ≥ (1 − α)x  ∀x,   (2b)
      p_t^π(x) ≥ c  ∀x.   (2c)
The objective (2a) gives the profit of the intermediary as the difference between the payments collected
from the buyers and the payments made to the seller. The revenue sharing constraint (2b) imposes that
the intermediary's profit is at most a fraction α of the total revenue, or equivalently (x − p_t^π(x))/x ≤ α,
where x is the payment from the buyers. The floor constraint (2c) imposes that the seller is paid at
least c. These constraints are imposed at every auction.
We next characterize the optimal decisions of the intermediary in the single period model. Some
definitions are in order. Let r*(c) be an optimal reserve price in the second price auction when the
seller's cost is c:
r*(c) = argmax_{r ≥ 0} π(r, c).
To avoid trivialities we assume that the optimal reserve price is unique. Because the profit function
π(r, c) has increasing differences in (r, c), the optimal reserve price is non-decreasing in the cost,
that is, r*(c) ≤ r*(c′) for c ≤ c′.
Our main result in this section characterizes the optimal decision of the intermediary in this model.
Theorem 3.1. The optimal decision of the intermediary is to set p_t^π(x) = max(c, (1 − α)x) and
r_t^π = max{min{c̄, r*(c)}, r*(0)}, where c̄ = c/(1 − α).
The reserve price c̄ = c/(1 − α) in the above theorem is the naïve reserve price that satisfies the
revenue sharing scheme by inflating the opportunity cost by 1/(1 − α). When the opportunity cost
c is very low (c̄ ≤ r*(0)), pricing according to c̄ is not optimal because demand is elastic at c̄ and
the intermediary can improve profits by increasing the reserve price. Here the intermediary ignores
the opportunity cost, prices optimally according to r_t^π = r*(0) and pays the seller according to
p_t^π(x) = (1 − α)x. When the opportunity cost c is very high (c̄ ≥ r*(c)), pricing according to c̄
is again not optimal because demand is inelastic at c̄ and the intermediary can improve profits by
decreasing the reserve price. Here the intermediary internalizes the opportunity cost, prices optimally
according to r_t^π = r*(c) and pays the seller according to p_t^π(x) = max(c, (1 − α)x).
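Theorem 3.1 translates directly into code; a sketch reusing the expected_profit and optimal_reserve
helpers from the sketch in Section 2 (all names ours):

def single_period_policy(c, alpha, bf, bs, grid):
    """Optimal single-period reserve and payment rule from Theorem 3.1."""
    c_bar = c / (1.0 - alpha)
    reserve = max(min(c_bar, optimal_reserve(c, bf, bs, grid)),
                  optimal_reserve(0.0, bf, bs, grid))
    payment = lambda x: max(c, (1.0 - alpha) * x)   # p(x) = max(c, (1 - alpha) x)
    return reserve, payment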
4 Multi Period Revenue Sharing Scheme
In this case the revenue sharing scheme imposes that the aggregate profit of the intermediary is at most
α of the buyers' aggregate payment. Additionally, in this model the opportunity costs are satisfied in
an aggregate fashion over all auctions, that is, the payments to the seller need to be at least the floor
price times the number of items sold. The intermediary's decision problem can be characterized by
the following mathematical program with optimal value J^M, where x_t^π = max(r_t^π, b_t^s):
J^M ≜ max_π Σ_{t=1}^T E[ 1{b_t^f ≥ r_t^π} (x_t^π − p_t^π(x_t^π)) ]   (3a)
s.t.  Σ_{t=1}^T 1{b_t^f ≥ r_t^π} (p_t^π(x_t^π) − (1 − α)x_t^π) ≥ 0,   (3b)
      Σ_{t=1}^T 1{b_t^f ≥ r_t^π} (p_t^π(x_t^π) − c) ≥ 0.   (3c)
The objective (3a) gives the profit of the intermediary as the difference between the payments
collected from the buyers and the payments made to the seller. The revenue sharing constraint (3b)
imposes that the intermediary's profit is at most a fraction α of the total revenue. The floor constraint (3c)
imposes that the seller is paid at least c. These constraints are imposed over the whole horizon.
The stochastic decision problem (3) can be solved via Dynamic Programming. To provide some
intuition about the structure of the optimal solution, we solve a Lagrangian relaxation of the problem
where we introduce nonnegative dual variables for the floor constraint (3c) and the revenue sharing
constraint (3b). Lagrangian relaxations provide upper bounds on the optimal
objective value and induce heuristic policies of provably good performance in many settings (e.g.,
see [7]). Moreover, we shall see that the optimal policy derived from the Lagrangian relaxation is optimal
for problem (3) if constraints (3c) and (3b) are imposed in expectation instead of almost surely:
Theorem 4.1. Let θ* ∈ argmin_{0 ≤ θ ≤ 1} Ψ(θ). The policy p_t^π(x) = (1 − θ*)c + θ*(1 − α)x and
r_t^π = r*(c(θ*)) is optimal for problem (3) when constraints (3c) and (3b) are imposed in expectation
instead of almost surely, where
Ψ(θ) ≜ T (1 − α(1 − θ)) · sup_r π(r, c(θ)),   with c(θ) ≜ (1 − θ)c / (1 − α(1 − θ)).
Remark 4.2. Although the multi period policy proposed is not a solution to the original program (3),
we emphasize that it naturally induces heuristic policies (e.g., see Algorithm 1) that are asymptotically
optimal solutions to the original multi period problem (3) without relaxation (see Theorem 6.1).
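The multiplier θ* can be found by a one-dimensional search over Ψ; a sketch (names ours) reusing
the expected_profit helper from Section 2 to evaluate sup_r π(r, c(θ)) on a reserve grid:

import numpy as np

def effective_cost(theta, c, alpha):
    """c(theta) = (1 - theta) c / (1 - alpha (1 - theta))."""
    return (1.0 - theta) * c / (1.0 - alpha * (1.0 - theta))

def psi(theta, c, alpha, T, bf, bs, grid):
    """Monte Carlo estimate of Psi(theta) from Theorem 4.1."""
    c_theta = effective_cost(theta, c, alpha)
    best = max(expected_profit(r, c_theta, bf, bs) for r in grid)
    return T * (1.0 - alpha * (1.0 - theta)) * best

def optimal_theta(c, alpha, T, bf, bs, grid, n_theta=101):
    thetas = np.linspace(0.0, 1.0, n_theta)
    return min(thetas, key=lambda t: psi(t, c, alpha, T, bf, bs, grid))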
5 Comparative Analysis
We first compare the optimal reserve prices of the single period and multi period models.
Proposition 5.1. Let r^S ≜ max{min{c̄, r*(c)}, r*(0)} be the optimal reserve price of the single
period constrained model and r^M ≜ r*(c(θ*)) be the optimal reserve price of the multi period
constrained model. Then r^S ≥ r^M.
The previous result shows that the reserve price of the single-period constrained model is larger than
or equal to that of the multi-period constrained model. As a consequence, in the multi-period
constrained model items are allocated more frequently and the social welfare is larger.
We next compare the intermediary's optimal profit under the single period and multi period models.
This result quantifies the benefits of dynamic revenue sharing and provides insight into when dynamic
revenue sharing is profitable for the intermediary.
Proposition 5.2. Let θ^S ∈ [0, 1] be such that r*(c(θ^S)) = r^S. Then
J^S ≤ J^M ≤ J^S + (1 − θ^S) T E[((1 − α)b^s − c)^+].
The previous result shows that the benefit of dynamic revenue sharing is driven, to a large extent, by
the second-highest bid and the opportunity cost c. If the market is thin and the second-highest bid b^s
is low, then the truncated expectation E ≜ E[((1 − α)b^s − c)^+] is low and the benefit from dynamic
revenue sharing is small, that is, J^S ≈ J^M. If the market is thick and the second-highest bid b^s is
high, then the benefit of dynamic revenue sharing depends on the opportunity cost c. If the floor
price c is very low, then r^S = r*(0) and λ^S = 1, implying that the coefficient in front of E is zero,
and there is no benefit of dynamic revenue sharing: J^S = J^M. If the floor price c is very high, then
r^S = r*(c) and λ^S = 0, implying that the coefficient in front of E is 1. However, in this case the
truncated expectation E is small, and again there is little benefit of dynamic revenue sharing, that is,
J^S ≈ J^M. Thus the sweet spot for dynamic revenue sharing is when the second-highest bid is high
and the opportunity cost is neither too high nor too low.
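The driver of the gap in Proposition 5.2 is precisely the truncated expectation E. A quick Monte-Carlo sketch (ours; the lognormal model for the second-highest bid is an arbitrary stand-in, not from the paper) makes the thin-market and extreme-floor intuition tangible:

```python
import random

def truncated_expectation(alpha, c, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of E[((1 - alpha) * b_s - c)^+] under a
    hypothetical lognormal distribution for the second-highest bid b_s."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        b_s = rng.lognormvariate(0.0, 1.0)  # stand-in bid model (assumption)
        total += max((1.0 - alpha) * b_s - c, 0.0)
    return total / n_samples

# The benefit term shrinks when the floor c is very high (truncation bites),
# and its coefficient vanishes when c is very low (lambda^S = 1).
for c in (0.1, 1.0, 10.0):
    print(c, truncated_expectation(alpha=0.25, c=c))
```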
6 Heuristic Revenue Sharing Schemes
So far we have focused on the theory of revenue sharing schemes. We now switch our focus to applying
the insights derived from the theory to the practical implementation of revenue sharing schemes. First we
note that while the policy in the statement of Theorem 4.1 is only guaranteed to satisfy the constraints in
expectation, a feasible policy for the stochastic decision problem should satisfy the constraints in an
almost-sure sense.
We therefore start by providing two transformations that convert a given policy satisfying the constraints in
expectation into another policy satisfying the constraints on every sample path.
6.1 Multi-period Refund Policy
Our first transformation will keep track of how much each constraint is violated and will issue a
refund to the seller in the last period (see Algorithm 1).
ALGORITHM 1: Heuristic Refund Policy from Lagrangian Relaxation
1: Determine the optimal dual variable λ* ∈ arg min_{0≤λ≤1} Φ(λ).
2: for t = 1, . . . , T do
3:     Set the reserve price r*_t = r*(c(λ*)).
4:     if the item is sold, that is, b^f_t ≥ r*_t, then
5:         Collect the buyers' payment x*_t = max(r*_t, b^s_t).
6:         Pay the seller p*_t(x*_t) = (1 − λ*)c + λ*(1 − α)x*_t.
7:     end if
8: end for
9: Let D^F = Σ_{t=1}^T 1{b^f_t ≥ r*_t} · (p*_t(x*_t) − c) be the floor deficit.
10: Let D^R = Σ_{t=1}^T 1{b^f_t ≥ r*_t} · (p*_t(x*_t) − (1 − α)x*_t) be the revenue sharing deficit.
11: Pay the seller −min{D^F, D^R, 0}.
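Algorithm 1 transcribes almost line-for-line into code. The sketch below (ours; the bid stream and the parameters r_star and lam_star are assumed to be given, e.g. from the preprocessing of Section 7) simulates the policy and issues the final refund:

```python
def run_refund_policy(bids, r_star, lam_star, alpha, c):
    """Heuristic refund policy (Algorithm 1).

    bids: list of (b_f, b_s) pairs (highest, second-highest bid) per auction.
    Returns (total collected from buyers, total paid to the seller).
    """
    collected = paid = 0.0
    floor_deficit = revshare_deficit = 0.0   # D^F and D^R
    for b_f, b_s in bids:
        if b_f >= r_star:                    # item is sold (line 4)
            x = max(r_star, b_s)             # buyers' payment (line 5)
            p = (1.0 - lam_star) * c + lam_star * (1.0 - alpha) * x  # line 6
            collected += x
            paid += p
            floor_deficit += p - c
            revshare_deficit += p - (1.0 - alpha) * x
    # Line 11: refund whatever was under-paid on this sample path.
    paid += -min(floor_deficit, revshare_deficit, 0.0)
    return collected, paid
```

For instance, run_refund_policy([(5.0, 3.0), (1.0, 0.5)], r_star=2.0, lam_star=0.6, alpha=0.25, c=1.0) sells only the first auction and tops up the seller if either cumulative constraint was violated.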
The following result analyzes the performance of the heuristic policy. We omit the proof, as this is a
standard result in the revenue management literature.
Theorem 6.1 (Theorem 1, [7]). Let J^H be the expected performance of the heuristic policy. Then

    J^H ≤ J^M ≤ J^H + O(√T).

The previous result shows that the heuristic policy given by Algorithm 1 is asymptotically optimal
for the multi-period constrained model; that is, it implies that J^H/J^M → 1 as T → ∞. When the
number of auctions is large, by the Law of Large Numbers, stochastic quantities tend to concentrate
around their means. So the floor and revenue sharing deficits incurred by violations of the respective
constraints are small relative to the platform's profit, and the policy becomes asymptotically optimal.
Prefix and Hybrid Revenue Sharing Policies. We also propose several other policies satisfying
even more stringent business constraints: revenue sharing constraints can be satisfied in aggregate
over all past auctions at every point in time. Construction details can be found in the full version.
7 Overview of Empirical Evaluation
In this section, we use anonymized real bid data from a major ad exchange to evaluate the policies
discussed in the previous sections. Our goal is to validate our insights on data. In the theoretical
part of this paper we made simplifying assumptions that do not necessarily hold on data. For example,
we assume quasi-concavity of the expected profit function Π(r, c). Even though this function is not
concave, we can still estimate it from data and optimize it using linear search. Our theoretical results
also assume we have access to the distributions of buyers' bids; we build such distributions from past
data. Finally, in our real data set bids are not necessarily stationary and identically distributed over
time. Even though there might be inaccuracies from bids changing from one day to another, our
revenue sharing policies are also robust to such non-stationarity.
Data Sets The data set is a collection of auction records, where each record corresponds to a real-time
auction for an impression and consists of: (i) a seller (publisher) id, (ii) the seller-declared
opportunity cost, and (iii) a set of bid records. The maximum revenue share α that the intermediary
can take is set to a constant. To show that our results do not rely on the selection of this constant,
we run the simulation for different values of α (α = 0.15, 0.2, 0.25); due to space limits,
we only present the numbers for α = 0.25 and refer the reader to the full version for more details.
Our data set consists of a random sample of auctions from 20 large publishers over a period of 2
days. We partition the data set into a training set consisting of the first day's data and a testing
set consisting of the second day's data.
Preprocessing Steps Before running the simulation, we need to preprocess the data set.
The goal of the preprocessing is to learn, for each seller, the parameters required by the policies we introduced,
in particular the optimal reserve function r* and the optimal Lagrange multiplier λ*.
We do this estimation using the training set, i.e., the data from the first day.
The first problem is to estimate Π(r, c) and r*(c). To estimate Π(r, c) for a given impression we
look at all impressions in the training set with the same seller and obtain a list of (b^f, b^s) pairs. We
build the empirical distribution in which each of those pairs is picked with equal probability. This allows
us to evaluate and optimize Π(r, c) with a single pass over the data using the technique described
in [6]. For each seller, to estimate λ*, we enumerate different λ's from a discretization of [0, 1]
(denoted by D) and evaluate the profits of the induced policies on the training set. The estimate λ̂*
of λ* is then the λ that yields the maximum profit on the training set, i.e., λ̂* ≜ arg max_{λ∈D} profit(λ).
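This preprocessing admits a direct, if brute-force, implementation (ours): estimate_profit evaluates the empirical Π(r, c); best_reserve finds r*(c) by enumeration rather than the single-pass technique of [6]; and estimate_lambda grid-searches λ̂*. The formula for c(λ) follows Theorem 4.1 as reconstructed above, so treat it as an assumption.

```python
def estimate_profit(bids, r, c):
    """Empirical Pi(r, c): average profit per auction at reserve r when
    the seller is paid c per sold impression."""
    total = sum(max(r, b_s) - c for b_f, b_s in bids if b_f >= r)
    return total / len(bids)

def best_reserve(bids, c):
    """r*(c) by linear search over candidate reserves (all observed bids)."""
    candidates = sorted({b for pair in bids for b in pair} | {c})
    return max(candidates, key=lambda r: estimate_profit(bids, r, c))

def estimate_lambda(bids, alpha, c, grid_size=100):
    """Grid search over D = {0, 1/grid_size, ..., 1} for the dual variable."""
    def realized_profit(lam):
        c_lam = (1.0 - lam) * c / (1.0 - lam * (1.0 - alpha))  # c(lambda)
        r = best_reserve(bids, c_lam)
        profit = 0.0
        for b_f, b_s in bids:
            if b_f >= r:
                x = max(r, b_s)
                profit += x - ((1.0 - lam) * c + lam * (1.0 - alpha) * x)
        return profit
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(grid, key=realized_profit)
```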
7.1 Evaluating Revenue Sharing Policies
We evaluate the different policies discussed in the paper on the testing set (day 2 of the data set), using
the parameters r̂*(c) and λ̂* learned from the training set during preprocessing. For each revenue
sharing policy we evaluate, we are concerned with the following metrics: profit of the exchange;
payout to the sellers; match rate, which corresponds to the number of impressions allocated; revenue
extracted from buyers; and buyers' values, which is the sum of highest bids over allocated impressions
(we assume that buyers report their values truthfully in the second-price auction). In addition, the
average intermediary's revenue share is calculated.
The policies evaluated are the following: NAIVE, the naïve policy (Section 2); SINGLE, the single-period
policy (Section 3); REFUND, the multi-period refund policy (Algorithm 1); and PREFIX and HYBRID.⁵
In Table 1, we report the results of these policies for α = 0.25 (see the full version
for more values of α). The metrics are reported with respect to the NAIVE policy; in other words,
the cell in the table corresponding to the revenue of policy P is the revenue lift of P with respect to NAIVE.
⁵The details of the PREFIX and HYBRID policies are omitted here; see the full version for further details.
policy   | profit  | payout  | match rate | revenue | buyers values | rev. share
NAIVE    |  0.00%  |  0.00%  |   0.00%    |  0.00%  |    0.00%      |  25.00%
SINGLE   | +1.64%  | +2.97%  |  +1.07%    | +2.64%  |   +1.39%      |  24.76%
REFUND   | +9.55%  | +9.57%  | +10.71%    | +9.56%  |   +9.64%      |  25.00%
PREFIX   | −1.00%  | +2.16%  | −18.51%    | +1.37%  |   −2.90%      |  24.41%
HYBRID   | +4.61%  | +6.90%  |  +6.74%    | +6.33%  |   +4.55%      |  24.60%

Table 1: Performance of the policies for α = 0.25.
Here revenue lift(P) = revenue(P)/revenue(NAIVE) − 1. The only metric that is not reported as a
percentage lift is the revenue share in the last column: rev share(P) = profit(P)/revenue(P).
Interpreting Simulation Results What conclusions can we draw from the lift numbers? The first
conclusion is that even though the theoretical model deviates from practice in a number of different
ways (concavity of Π(r, c), precise distribution estimates, stationarity of bids), we are still able to
improve over the naïve policy. Notice that the naïve policy implements the optimal reserve price
subject to a fixed revenue sharing policy, so all the gains from reserve price optimization are already
accounted for in our baseline.
We start by observing that even with SINGLE, which is a simple policy, we are able to considerably
improve over NAIVE across all performance metrics. This highlights that the observation that "profit
and revenue can be improved by reducing the share taken by the exchange" is not only a theoretical
possibility, but a reality on real-world data.
Next we compare the lifts of SINGLE, which enforces revenue sharing constraints per impression,
versus REFUND, which enforces the constraints in aggregate. The lift is 5.8 times larger
for REFUND than for SINGLE: for α = 0.25, the lift⁶ for SINGLE is +1.64% while for REFUND it is
+9.55%. This shows the importance of optimizing revenue shares across all auctions instead of
per auction. Additionally, we observe that the match rate and buyers' values of REFUND are higher
than those of SINGLE. This is in agreement with Proposition 5.1: because the reserve price of the
single-period constrained model is typically larger than that of the multi-period constrained model,
we expect REFUND to clear more auctions, which in turn leads to higher buyer values.
Finally, we briefly analyze the performance of the PREFIX and HYBRID policies. While PREFIX is
proposed to guarantee more stringent constraints, it fails to have a positive impact on profit. With
some slight modifications, HYBRID overcomes these shortcomings by granting the
intermediary more freedom in picking reserve prices. As a result, we obtain a policy that is consistently
better than SINGLE. Even though not as good as REFUND in terms of revenue lift, HYBRID satisfies
the more stringent constraints that are not necessarily satisfied by REFUND. To sum up, the policies
can be ranked as follows in terms of performance:
REFUND ≻ HYBRID ≻ SINGLE ≻ NAIVE ≈ PREFIX.
References
[1] Santiago R. Balseiro, Jon Feldman, Vahab Mirrokni, and S. Muthukrishnan. Yield optimization of display
advertising with ad exchange. Management Science, 60(12):2886–2907, 2014.
[2] Renato Gomes and Vahab S. Mirrokni. Optimal revenue-sharing double auctions with applications to ad
exchanges. In 23rd International World Wide Web Conference, WWW '14, pages 19–28, 2014.
[3] R. Preston McAfee and John McMillan. Auctions and bidding. Journal of Economic Literature, 25(2):699–738, 1987.
[4] R. Myerson and M. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory
(JET), 29:265–281, 1983.
[5] Rad Niazadeh, Yang Yuan, and Robert D. Kleinberg. Simple and near-optimal mechanisms for market
intermediation. In Web and Internet Economics, WINE 2014, Proceedings, pages 386–399, 2014.
[6] Renato Paes Leme, Martin Pál, and Sergei Vassilvitskii. A field guide to personalized reserve prices. In
Proceedings of WWW, pages 1093–1102, 2016.
[7] Kalyan Talluri and Garrett van Ryzin. An analysis of bid-price controls for network revenue management.
Management Science, 44(11):1577–1593, 1998.
⁶The reader might ask how to interpret lift numbers. The annual revenue of display advertising exchanges is
on the order of billions of dollars; at that scale, a 1% lift corresponds to tens of millions of dollars in incremental
annual revenue. We emphasize that this lift is in addition to that obtained by reserve price optimization.
General Polytopes with Line Search
Mohammad Ali Bashiri
Xinhua Zhang
Department of Computer Science, University of Illinois at Chicago
Chicago, Illinois 60661
{mbashi4,zhangx}@uic.edu
Abstract
Frank-Wolfe (FW) algorithms with linear convergence rates have recently achieved
great efficiency in many applications. Garber and Meshi (2016) designed a new
decomposition-invariant pairwise FW variant with favorable dependency on the
domain geometry. Unfortunately it applies only to a restricted class of polytopes
and cannot achieve theoretical and practical efficiency at the same time. In this
paper, we show that by employing an away-step update, similar rates can be
generalized to arbitrary polytopes with strong empirical performance. A new
"condition number" of the domain is introduced, which allows leveraging the sparsity
of the solution. We applied the method to a reformulation of SVM, and the linear
convergence rate depends, for the first time, on the number of support vectors.
1 Introduction
The Frank-Wolfe algorithm [FW, 1] has recently gained revived popularity in constrained convex
optimization, in part because linear optimization on many feasible domains of interest admits efficient
computational solutions [2]. It has been well known that FW achieves an O(1/ε) rate for smooth convex
optimization on a compact domain [1, 3, 4]. Recently a number of works have focused on linearly
converging FW variants under various assumptions.
In the context of convex feasibility problem, [5] showed linear rates for FW where the condition
number depends on the distance of the optimum to the relative boundary [6]. Similar dependency
was derived in the local linear rate on polytopes using the away-step [6, 7]. With a different analysis
approach, [8?10] derived linear rates when the Robinson?s condition is satisfied at the optimal solution
[11], but it was not made clear how the rate depends on the dimension and other problem parameters.
To avoid the dependency on the location of the optimum, [12] proposed a variant of FW whose
rate depends on some geometric parameters of the feasible domain (a polytope). In a similar flavor,
[13, 14] analyzed four versions of FW including away-steps [6], and their affine-invariant rates depend
on the pyramidal width (Pw) of the polytope, which is hard to compute and can still be ill-conditioned.
Moreover, [15] recently gave a duality-based analysis for non-strongly convex functions. Some lower
bounds on the dependency of problem parameters for linear rates of FW are given in [12, 16].
To get around the lower bound, one may tailor FW to specific objectives and domains (e.g. the spectrahedron in [17]). [18] specialized the pairwise FW (PFW) to simplex-like polytopes (SLPs), whose
vertices are binary and which are defined by equality constraints and x_i ≥ 0. The advantages include: a) the
convergence rate depends linearly on the cardinality of the optimal solution and the squared domain
diameter (D²), which can be much better than the pyramidal width; b) it is decomposition-invariant,
meaning that it does not maintain a pool of accumulated atoms, and the away-step is performed on the
face that the current iterate lies on. This results in considerable savings in computation and storage.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
                                   PFW-1 [18]   PFW-2 [18]   LJ [13]      AFW-1   AFW-2
                                   (SLP)        (general)    (general)    (SLP)   (general)
Unit cube [0,1]^n                  ns           —            n²           ns      n²s
P_k = {x ∈ [0,1]^n : 1^⊤x = k}     ks           —            n (k = 1)    ks      k²s
Q_k = {x ∈ [0,1]^n : 1^⊤x ≤ k}     —            —            k·Pw⁻²       —       k²·min(sk, n)
arbitrary polytope in R^n          —            —            D²·Pw⁻²      —       D²·n·Hs

Table 1: Comparison of related methods. These numbers need to be multiplied by κ·log(1/ε) to get the
convergence rates, where κ is the condition number of the objective, D is the diameter of the domain,
s is the cardinality of the optimum, and Pw is the pyramidal width. Our method is AFW. "—" means
inapplicable or no rate known. PFW-1 [18] and AFW-1 apply only to SLPs, hence do not cover Q_k
(k ≥ 2). [13] showed the pyramidal width for P_k only with k = 1.
However, [18] suffers from multiple inherent restrictions. First, it applies only to SLPs, which, although
they encompass useful sets such as the k-simplex P_k, do not cover its convex hull with the origin (Q_k):

    P_k = {x ∈ [0,1]^n : 1^⊤x = k},    Q_k = {x ∈ [0,1]^n : 1^⊤x ≤ k},  where k ∈ {1, . . . , n}.

Here 1 = (1, . . . , 1)^⊤. Extending its analysis to general polytopes is not promising because it relies
fundamentally on the integrality of the vertices. Second, its rate is derived from a delicately designed
sequence of step sizes (PFW-1), which exhibits no empirical competency. In fact, the experiments in
[18] resorted to line search (PFW-2), for which no rate was proved. As shown in [13], dimension-friendly
bounds are intrinsically hard for PFW, and they settled for the factorial of the vertex number.
The goal of this paper is to address these two issues while at the same time retaining the computational
efficiency of decomposition invariance. Our contributions are fourfold. First, we generalize the
dimension-friendly linear rates to arbitrary polytopes. This is achieved by replacing the pairwise steps of
PFW in [18] with the away-step FW (AFW, §2), and by setting the step sizes by line search instead of a
pre-defined schedule. This allows us to avoid "swapping atoms" in PFW, and the resulting method
(AFW-2) delivers not only strong empirical performance (§5) but also strong theoretical guarantees
(§3.5), improving upon PFW-1 and PFW-2, which are strong in either theory or practice, but not both.
Second, a new condition number Hs is introduced in §3.1 to characterize the dimension dependency of
AFW-2. Compared with the pyramidal width, it not only provides a more explicit form for computation,
but also leverages the cardinality (s) of the optimal solution. This may lead to much smaller constants
considering the likely sparsity of the solution. Since the pyramidal width is hard to compute [13], we
leave a thorough comparison for future work, but they are comparable on simple polytopes. The
decomposition invariance of AFW-2 also makes each step much more efficient than in [13].
Third, when the domain is indeed an SLP, we provide a step size schedule (AFW-1, §3.4) yielding the
same rate as PFW-1. This is in fact nontrivial, because the price for replacing PFW by AFW is the
much increased difficulty of maintaining the integrality of the iterates: the current iterate is rescaled by
AFW, while PFW simply adds (scaled) new atoms (which, on the other hand, complicates the analysis
of line search [13]). Our solution relies on first running a constant number of FW-steps.
Finally, we applied AFW to a relaxed-convex-hull reformulation of binary kernel SVM with bias (§4),
obtaining O(nκ(#SV)³ log(1/ε)) computational complexity for AFW-1 and O(nκ(#SV)⁴ log(1/ε)) for
AFW-2. Here κ is the condition number of the objective, n is the number of training examples, and
#SV is the number of support vectors in the optimal solution. This is much better than the best known
result of O(n³κ log(1/ε)) based on sequential minimal optimization [SMO, 19, 20], because #SV is
typically much smaller than n. To the best of our knowledge, this is the first linear convergence rate
for hinge-loss SVMs with bias where the rate leverages dual sparsity.
A brief comparison of our method (AFW) with [18] and [13] is given in Table 1. AFW-1 matches
the superior rates of PFW-1 on SLPs, and AFW-2 is more general, with a rate slightly worse than
AFW-1's on SLPs. PFW-2 has no rates available, and the pyramidal width is hard to compute in general.
2 Preliminaries and Algorithms
Our goal is to solve min_{x∈P} f(x), where P is a polytope and f is both strongly convex and smooth. A
function f : P → R is σ-strongly convex if f(y) ≥ f(x) + ⟨y − x, ∇f(x)⟩ + (σ/2)‖y − x‖² for all x, y ∈ P.
Algorithm 1: Decomposition-invariant Away-step Frank-Wolfe (AFW)
1: Initialize x₁ by an arbitrary vertex of P. Set q₀ = 1.
2: for t = 1, 2, . . . do
3:     Choose the FW-direction via v⁺_t ∈ arg min_{v∈P} ⟨v, ∇f(x_t)⟩, and set d^FW_t ← v⁺_t − x_t.
4:     Choose the away-direction v⁻_t by calling the away-oracle in (3), and set d^A_t ← x_t − v⁻_t.
5:     if ⟨d^FW_t, −∇f(x_t)⟩ ≥ ⟨d^A_t, −∇f(x_t)⟩ then d_t ← d^FW_t, else d_t ← d^A_t.   ▹ Choose a direction
6:     Choose the step size η_t by using one of the following two options:
7:     Option 1: Pre-defined step size:   ▹ This is for SLPs only. Needs input arguments n₀, γ_t.
8:         if t ≤ n₀ then   ▹ Perform FW-steps for the first n₀ steps
9:             Set q_t = t, η_t = 1/t, and revert d_t = d^FW_t.
10:        else
11:            Find the smallest integer s ≥ 0 such that q_t, defined as follows, satisfies q_t ≥ ⌈1/γ_t⌉:
                   q_t ← 2^s·q_{t−1} + 1 if line 5 adopts the FW-step, q_t ← 2^s·q_{t−1} − 1 if line 5 adopts the away-step; and set η_t ← q_t⁻¹.   (2)
12:        end if
13:    Option 2: Line search: η_t ← arg min_{η≥0} f(x_t + η·d_t), s.t. x_t + η·d_t ∈ P.   ▹ General purpose
14:    x_{t+1} ← x_t + η_t·d_t. Return x_t if ⟨−∇f(x_t), d^FW_t⟩ ≤ ε.
Algorithm 2: Decomposition-invariant Pairwise Frank-Wolfe (PFW) (exactly the same as [18])
1: . . . as in Algorithm 1, except replacing a) line 5 by d_t = d^PFW_t := v⁺_t − v⁻_t, and b) lines 8–11 by
   Option 1: Pre-defined step size: Find the smallest integer s ≥ 0 such that 2^s·q_{t−1} ≥ 1/γ_t.
   Set q_t ← 2^s·q_{t−1} and η_t ← q_t⁻¹.   ▹ This option is for SLPs only.
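To fix ideas, here is a minimal NumPy sketch (ours) of Algorithm 1 with line search (Option 2), specialized to a quadratic objective over the simplex P₁. On P₁ the away-oracle (3) reduces to searching the support of x_t and the feasibility bound on η has a closed form; both specializations are our own.

```python
import numpy as np

def afw_line_search(Q, b, n, iters=500, eps=1e-10):
    """Minimize f(x) = 0.5 x'Qx - b'x over {x >= 0, 1'x = 1} with
    decomposition-invariant away-steps and exact line search."""
    x = np.zeros(n); x[0] = 1.0                    # start at a vertex
    for _ in range(iters):
        g = Q @ x - b                              # gradient
        i_fw = int(np.argmin(g))                   # FW-oracle
        d_fw = -x.copy(); d_fw[i_fw] += 1.0        # v+ - x
        support = np.flatnonzero(x > 0)            # tight constraints x_i = 0
        i_aw = support[int(np.argmax(g[support]))] # away-oracle (3) on P_1
        d_aw = x.copy(); d_aw[i_aw] -= 1.0         # x - v-
        if -g @ d_fw >= -g @ d_aw:                 # line 5: steeper descent
            d, eta_max = d_fw, 1.0
        else:
            d = d_aw
            # away-step stays feasible until x_j hits 0: eta <= x_j/(1 - x_j)
            eta_max = x[i_aw] / (1.0 - x[i_aw]) if x[i_aw] < 1.0 else 1.0
        if -g @ d_fw <= eps:                       # line 14 stopping rule
            break
        curv = d @ Q @ d                           # exact line search (quadratic f)
        eta = eta_max if curv <= 0 else min(max(-(g @ d) / curv, 0.0), eta_max)
        x = x + eta * d
    return x
```

The same skeleton handles Option 1 by replacing the line-search block with the q_t bookkeeping of (2); only the two oracles and the feasibility bound on η change with the domain.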
In this paper, all norms are Euclidean, and we write vectors in bold lowercase letters. f is β-smooth
if f(y) ≤ f(x) + ⟨y − x, ∇f(x)⟩ + (β/2)‖y − x‖² for all x, y ∈ P. Denote the condition number as
κ = β/σ, and the diameter of the domain P as D. We require D < ∞, i.e. the domain is bounded.
Let [m] := {1, . . . , m}. In general, a polytope P can be defined as

    P = {x ∈ R^n : ⟨a_k, x⟩ ≤ b_k ∀ k ∈ [m], Cx = d}.   (1)

Here {a_k} is a finite set of "directions" (m < ∞), and the b_k cannot be reduced without changing P.
Although the equality constraints could be written equivalently as pairs of linear inequalities, we separate
them out to improve the bounds below. Denoting A = (a₁, . . . , a_m)^⊤ and b = (b₁, . . . , b_m)^⊤, we
can simplify the representation to P = {x ∈ R^n : Ax ≤ b, Cx = d}.
In the sequel, we will find highly efficient solvers for a special class of polytopes that was also studied
by [18]. We call a polytope a simplex-like polytope (SLP) if all vertices are binary (i.e. the set of
extreme points ext(P) is contained in {0,1}^n) and the only inequality constraints are x ∈ [0,1]^n.¹
¹Although [18] does not allow for x ≤ 1 constraints, we can add a slack variable y_i: y_i + x_i = 1, y_i ≥ 0.
Our decomposition-invariant Frank-Wolfe (FW) method with away-steps is shown in Algorithm 1.
There are two different schemes for choosing the step size: one with a fixed step size (AFW-1) and one
with line search (AFW-2). Compared with [13], AFW-2 enjoys decomposition invariance. Like [13],
we also present a pairwise version in Algorithm 2 (PFW), which is exactly the method given in [18].
The efficiency of the line search in step 13 of Algorithm 1 depends on the polytope. Although in
general one needs a problem-specific procedure to compute the maximal step size, we will show in the
experiments some examples where such procedures are available with high computational efficiency.
The idea of AFW is to compute a) the FW-direction in the conventional FW sense (via the FW-oracle),
and b) the away-direction (via the away-oracle), then pick the one that gives the steeper descent and
take a step along it. Our away-oracle adopts the decomposition-invariant approach of [18], which
differs from [13] by saving the cost of maintaining a pool of atoms. To this end, the search space of
the away-oracle is restricted to the vertices that satisfy an inequality constraint with equality whenever the
current x_t does so:

    v⁻_t := arg max_v ⟨v, ∇f(x_t)⟩,  s.t. Av ≤ b, Cv = d, and ⟨a_i, x_t⟩ = b_i ⟹ ⟨a_i, v⟩ = b_i ∀ i.   (3)
Besides saving the space of atoms, this also dispenses with computing inner products between the
gradient and all existing atoms. Before moving on to the analysis, we here make a new, albeit quick,
observation that this selection scheme is in fact decomposing x_t implicitly. Specifically, it tries all
possible decompositions of x_t, and for each of them finds the best away-direction in the traditional
sense; it then picks the best of the best over all proper convex decompositions of x_t.
Property 1. Denote S(x) := {S ⊆ P : x is a proper convex combination of all elements in S},
where proper means that all elements of S have strictly positive weight. Then the away-step in (3)
is exactly equivalent to max_{S∈S(x_t)} max_{v∈S} ⟨v, ∇f(x_t)⟩. See the proof in Appendix A.
3 Analysis
We aim to analyze the rate at which the primal gap h_t := f(x_t) − f(x*) decays. Here x* is the
minimizer of f, and we assume it can be written as a convex combination of s vertices of P.
3.1 A New Geometric "Condition Number" of a Polytope
Underlying the analysis of linear convergence for FW-style algorithms is the following inequality,
which involves a geometric "condition number" Hs of the polytope (v⁺_t and v⁻_t are the FW- and away-directions):

    √(2Hs·h_t/σ) · ⟨v⁺_t − v⁻_t, ∇f(x_t)⟩ ≤ ⟨x* − x_t, ∇f(x_t)⟩.   (4)

In Theorem 3 of [13], this Hs is essentially the inverse pyramidal width. In Lemma 3 of [18], it is the
cardinality of the optimal solution, which, despite being better than the pyramidal width, is restricted
to SLPs. Our first key step here is to relax this restriction to arbitrary polytopes and define our Hs.
to SLPs. Our first key step here is to relax this restriction to arbitrary polytopes and define our Hs .
Let {ui } be the set of vertices of the polytope P, and this set must be finite. We do not assume ui is
binary. The following ?margin? for each separating hyperplane directions ak will be important:
gk := max hak , ui i ? second max hak , ui i ? 0.
i
i
(5)
Here the second max is the second distinct max in {hak , ui i : i}. If hak , ui i is invariant to i, then
this inequality hak , xi ? bk is indeed an equality constraint (hak , xi = maxz?P hak , zi) hence can
be moved to Cx = d. So w.l.o.g, we assume gk > 0. Now we state the generalized result.
Lemma 1. Let P be defined as in (1). Suppose x can be written as a convex combination of s
vertices of P: x = Σ_{i=1}^s λ_i u_i, where λ_i ≥ 0, 1^⊤λ = 1. Then any y ∈ P can be written
as y = Σ_{i=1}^s (λ_i − Δ_i)u_i + (1^⊤Δ)z, such that z ∈ P, Δ_i ∈ [0, λ_i], and 1^⊤Δ ≤ √Hs · ‖x − y‖, where

    Hs := max_{S⊆[m], |S|=s} Σ_{j=1}^n ( Σ_{k∈S} a_kj / g_k )².   (6)

In addition, Equation (4) holds with this definition of Hs. Note that our Hs is defined here, not in (4).
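Because (6) is explicit, Hs can be checked numerically on small polytopes by enumerating the s-subsets of the inequality constraints. A brute-force sketch (ours; exponential in m, meant only to verify examples such as the unit cube below, where it returns Hs = s as in Example 1):

```python
from itertools import combinations

def compute_Hs(A, vertices, s):
    """Evaluate (6): max over |S| = s of sum_j (sum_{k in S} a_kj / g_k)^2,
    with g_k from (5) the gap between the two largest distinct values of
    <a_k, u> over the vertices u."""
    m, n = len(A), len(A[0])
    g = []
    for k in range(m):
        vals = sorted({sum(A[k][j] * u[j] for j in range(n)) for u in vertices})
        assert len(vals) >= 2, "constraint holds with equality; move to Cx = d"
        g.append(vals[-1] - vals[-2])
    return max(
        sum(sum(A[k][j] / g[k] for k in S) ** 2 for j in range(n))
        for S in combinations(range(m), s)
    )

# Unit cube in R^3: rows of A are +e_i and -e_i (x_i <= 1 and -x_i <= 0).
A = [[1,0,0], [0,1,0], [0,0,1], [-1,0,0], [0,-1,0], [0,0,-1]]
cube = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
print(compute_Hs(A, cube, s=2))  # -> 2.0, matching H_s = s
```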
Some intuitive interpretations of Hs are in order. First, the definition in (6) admits a much more
explicit characterization than the pyramidal width. The maximization in (6) ranges over all possible
subsets of constraints of cardinality s, and can hence be much lower than with s = m (taking all
constraints). Recall that the pyramidal width is oblivious to, and hence does not benefit from, the sparsity of
the optimal solution. More comparisons are hard to make because [13] only provided an existential
proof of the pyramidal width, along with its value for the simplex and hypercube only.
However, Hs is clearly not intrinsic to the polytope. For example, by definition Hs = n for Q₂.
By contrast, we can introduce a slack variable y to Q₂, leading to a polytope over [x; y] (vertical
concatenation) with x ≥ 0, y ≥ 0, y + 1^⊤x = 2; the augmented polytope enjoys Hs = s.
Nevertheless, adding slack variables increases the diameter of the space, and the vertices may no
longer be binary. It also incurs more computation.
Second, g_k may approach 0 (sending Hs to infinity) when more linear constraints are introduced and
vertices acquire closer neighbors. Hs is infinite if the domain is not a polytope, as that would require an
uncountable number of supporting hyperplanes. Third, due to the square in (6), Hs grows more rapidly as one
variable participates in a larger number of constraints than as a constraint involves a larger number
of variables. When all g_k = 1 and all a_kj are nonnegative, Hs grows with the magnitude of the a_kj.
However, this is not necessarily the case when the a_kj have mixed signs. Finally, Hs is relative
to the affine subspace that P lies in, and is independent of the linear equality constraints.
The proof of Lemma 1 utilizes the fact that the lowest value of 1^⊤Δ is the optimal objective value of

    min_{Δ,z} 1^⊤Δ,   s.t.  0 ≤ Δ ≤ λ,   y = x − (u₁, . . . , u_s)Δ + (1^⊤Δ)z,   z ∈ P,   (7)

where the inequalities are both elementwise. To ensure z ∈ P, we require Az ≤ b, i.e.

    (b1^⊤ − AU)Δ ≥ A(y − x),   where U = (u₁, . . . , u_s).   (8)

The rest of the proof utilizes the optimality conditions of Δ, and is relegated to Appendix A.
Compared with Lemma 2 of [18], our Lemma 1 does not require ext(P) to be binary, and it allows
arbitrary inequality constraints rather than only x ≥ 0. Note that Hs depends on b indirectly, and it admits
a more explicit form for computation than the pyramidal width. Obviously, Hs is non-decreasing in s.
Example 1. To get some intuition, consider the k-simplex P_k, or more generally polytopes {x ∈ [0,1]^n :
Cx = d}. In this case, the inequality constraints are exclusively x_i ∈ [0,1], meaning a_k = ±e_k for
all k ∈ [2n] in (1). Here e_k stands for a canonical vector of all zeros except a single 1 in the k-th
coordinate. Obviously all g_k = 1. Therefore, by Lemma 1, one can derive Hs = s for all s ≤ n.
Example 2. To include an inequality, consider Q_k, the convex hull of the k-simplex with the origin. Lemma 1
implies its Hs = n + 3s − 3, independent of k. One might hope to get a better Hs when k = 1, since
the constraint x ≤ 1 can be dropped in this case. Unfortunately, still Hs = n.
Remark 1. The L0 norm of the optimal x can be connected with s simply by Carathéodory's theorem.
Obviously s = ‖x‖₀ (the L0 norm) for P₁ and Q₁. In general, an x in P may be decomposed in multiple
ways, and Lemma 1 immediately applies to the lowest (best) possible value of s (which we refer
to as the cardinality of x, following [18]). For example, the smallest s for any x ∈ P_k (or Q_k) is
at most ‖x‖₀ + 1, because x must lie in the convex hull of V := {y ∈ {0,1}^n : 1^⊤y = k, x_i =
0 ⟹ y_i = 0 ∀i}. Clearly the affine hull of V has dimension ‖x‖₀, and V is a subset of ext(P_k) = ext(Q_k).
3.2 Tightness of Hs under a Given Representation of the Polytope
We show some important examples that demonstrate the tightness of Lemma 1 with respect to the
dimensionality (n) and the cardinality of x (s). Note the tightness is in the sense of satisfying the
conditions in Lemma 1, not in the rate of convergence of the optimization algorithm.
Example 3. Consider Q₂. u₁ = e₁ is a vertex; let x = u₁ (hence s = 1) and y = (1, ε, . . . , ε)^⊤,
where ε > 0 is a small scalar. In the necessary condition (8), the row corresponding to 1^⊤x ≤ 2
becomes Δ₁ ≥ (n − 1)ε = √(n − 1) · ‖x − y‖. By Lemma 1, Hs = n, which almost matches n − 1.
Example 4. Let us see another example that is not simplex-like. Let a_k = −e_k + e_{n+1} + e_{n+2}
for k ∈ [n]. Let A = (a₁, . . . , a_n)^⊤ = (−I, 1, 1), where I is the identity matrix. Define P as
P = {x ∈ [0,1]^{n+2} : Ax ≤ 1}, i.e. b = 1. Since A is totally unimodular, all the vertices of P must
be binary. Consider x = ε Σ_{i=1}^n i·e_i + r·e_{n+1} + (1 − r)e_{n+2}, where r = εn(n + 1)/2 and
ε > 0 is a small positive constant. x can be represented as the convex combination of n + 1 vertices

    x = Σ_{i=1}^n iε·u_i + (1 − r)u_{n+1},  where u_i = e_i + e_{n+1} for i ≤ n, and u_{n+1} = e_{n+2}.   (9)

With U = (u₁, . . . , u_{n+1}), we have b1^⊤ − AU = (I, 0). Let y = x + ε·e_{n+1}, which is clearly in P.
Then (8) becomes Δ ≥ ε1, and so 1^⊤Δ ≥ n·‖y − x‖. Applying Lemma 1 with s = n + 1 and
g_k = 1 for all k, we get Hs = 2n² + n − 1, which is of the same order of magnitude as n².
3.3 Analysis for Pairwise Frank-Wolfe (PFW-1) on SLPs
Equipped with Lemma 1, we can now extend the analysis in [18] to SLPs, where the constraint
x ≤ 1 can be explicitly accommodated without having to introduce slack variables, which would increase
the diameter D and cost more computation.
Theorem 1. Applying PFW-1 to an SLP, all iterates are feasible and h_t ≤ (βD²/2)(1 − c₁)^{t−1} if we
set γ_t = c₁(1 − c₁)^{(t−1)/2}, where c₁ = [σ/(16βHsD²)]^{1/2}. The proof just replaces every card(x*) in [18] with Hs.
Slight effort is needed to guarantee feasibility; we show this as Lemma 6 in Appendix A.
When P is not an SLP or general inequality constraints are present, we resort to line search (PFW-2),
which is more efficient than PFW-1 in practice. However, the analysis becomes challenging [13, 18],
because it is difficult to bound the number of steps where the step size is clamped due to the feasibility
constraint (the swap step in [13]). So [13] appealed to a bound that is the factorial of the number of
vertices. Fortunately, we will show below that by switching to AFW, the line search version achieves
linear rates with improved dimension dependency for general polytopes, and the pre-defined-step
version preserves the strong rates of PFW-1 on SLPs. These results are all facilitated by the Hs in Lemma 1.
3.4 Analysis for Away-step Frank-Wolfe with Pre-defined Step Size (AFW-1) on SLPs
We first show that AFW-1 achieves the same rate of convergence as PFW-1 on SLPs. Although this
may not appear surprising, and the proof architecture is similar to [18], we stress that the step size
needs delicate modifications, because the descent direction d_t in PFW does not rescale x_t, while
that of AFW does. Our key novelty is to first run a constant number of FW-steps (with an O(1/t) rate), and
to start accepting away-steps only when the step size is small enough to ensure feasibility and linear convergence.
We first establish the feasibility of the iterates under the pre-defined step sizes. Proofs are in Appendix A.
Lemma 2 (Feasibility of iterates for AFW-1). Suppose P is an SLP and the reference step sizes
{γ_t}_{t≥n₀} are contained in [0, 1]. Then the iterates generated by AFW-1 are always feasible.
Choosing the step size. Key to the AFW-1 algorithm is the delicately chosen sequence of step
sizes. For AFW-1, define (logarithms are natural)

    γ_t = (M₁√c₀ / (νM₂)) (1 − c₁)^{(t−1)/2},  where M₁ = √(σ/(8Hs)),  M₂ = βD²/2,  ν = 52,   (10)

    c₁ = (1/(4ν) − 1/ν²) · M₁²/M₂ < 1/200,   n₀ = ⌈1/c₁⌉,   c₀ = (3M₂ log n₀ / n₀)(1 − c₁)^{1−n₀}.   (11)

Lemma 3. In AFW-1, we have h_t ≤ (3/t)M₂ log t for all t ∈ [2, n₀]. Obviously n₀ ≥ 200 by (11).
2
This result is similar to Theorem 1 in [4]. However, their step size is 2/(t + 2) leading to a t+2
M2
rate of convergence. Such a step size will break the integrality of the iterates, and hence we adjusted
the step size, at the cost of a log t term in the rates which can be easily handled in the sequel.
The condition number c1 gets better (bigger) when: the strongly convex parameter ? is larger, the
smoothness constant ? is smaller, the diameter D of the domain is smaller, and Hs is smaller.
?1
Lemma 4. For all t ? n0 , AFW-1 satisfies a) ?t ? 1, b) ?t+1
? ?t?1 ? 1, and c) ?t ? [ 41 ?t , ?t ].
By Lemma 2 and Lemma 4a, we know that the iterates generated by AFW-1 are all feasible.
Theorem 2. Applying AFW-1 to an SLP, the gap decays as h_t ≤ c₀(1 − c₁)^{t−1} for all t ≥ n₀.
Proof. By Lemma 3, h_{n₀} ≤ (3M₂/n₀) log n₀ = c₀(1 − c₁)^{n₀−1}. Let the result hold for some t ≥ n₀. Then

    h_{t+1} ≤ h_t + η_t ⟨d_t, ∇f(x_t)⟩ + (β/2) η_t² D²   (smoothness of f)   (12)
      ≤ h_t + (η_t/2) ⟨v⁺_t − v⁻_t, ∇f(x_t)⟩ + (β/2) η_t² D²   (by step 5 of Algorithm 1)   (13)
      ≤ h_t − (η_t/2) √(σ/(2Hs)) h_t^{1/2} + (β/2) η_t² D²   (by (4) and the fact ⟨x* − x_t, ∇f(x_t)⟩ ≤ −h_t)   (14)
      ≤ h_t − (1/4) M₁ γ_t h_t^{1/2} + M₂ γ_t²   (Lemma 4c and the defn. of M₁, M₂)   (15)
      = h_t − (M₁²√c₀ / (4νM₂)) (1 − c₁)^{(t−1)/2} h_t^{1/2} + (M₁²c₀ / (ν²M₂)) (1 − c₁)^{t−1}   (by defn. of γ_t)   (16)
      ≤ c₀ (1 − c₁)^{t−1} (1 − M₁²/(4νM₂) + M₁²/(ν²M₂)) = c₀ (1 − c₁)^t   (by defn. of c₁).   (17)

Here the inequality in step (17) follows by treating (16) as a quadratic in h_t^{1/2} and applying the induction
assumption on h_t. The last step completes the induction: the conclusion also holds for step t + 1.
3.5 Analysis for Away-step Frank-Wolfe with Line Search (AFW-2)
We finally analyze AFW-2 on general polytopes with line search. Noting that f(x_t + η d_t) − f(x*) is
bounded by the right-hand side of (14) (with η_t replaced by η), we minimize both sides over η such that
x_t + η d_t ∈ P. If none of the inequality constraints is satisfied with equality at the optimal η_t of the
line search, then we call it a good step, and in this case

    h_{t+1} ≤ (1 − 1/(256 κ D² Hs)) h_t,   (18)

since the right-hand side of (14), as a function of η, is minimized at η*_t := M₁ h_t^{1/2} / (βD²).
The only task left is to bound the number of bad steps (i.e. steps where η_t is clamped by its upper bound). In [13],
where the set of atoms is maintained, it is easily shown that up to step t there can be at most t/2
bad steps, so the overall rate of convergence is slowed down by at most a factor of two. This
favorable result no longer holds in our decomposition-invariant AFW. However, thanks to a special
property of AFW, it is still not hard to bound the number of bad steps between two good steps.
First, notice that such clamping never happens for FW-steps, because η*_t ≤ 1 and, for FW-steps,
x_t + η_t d_t ∈ P implicitly enforces only η_t ≤ 1 (after η_t ≥ 0 is imposed). For an away-step, if the
line search is blocked by some constraint, then at least one inequality constraint turns into an
equality constraint if the next step is still an away-step. Since AFW selects the away-direction respecting
all equality constraints, a succession of away-steps (called an away epoch) must terminate when the
set of equalities defines a singleton. For any index set of inequality constraints S ⊆ [m], let P(S) :=
{x ∈ P : ⟨a_j, x⟩ = b_j, ∀ j ∈ S} be the set of points that satisfy these inequalities with equality. Let

    n(P) := max {|S| : S ⊆ [m], |P(S)| = 1, |P(S′)| = ∞ for all S′ ⊊ S}   (19)

be the maxi-min number of constraints needed to define a singleton. Then obviously n(P) ≤ n, and so to
find an ε-accurate solution, AFW-2 requires at most O(κD²Hs·n(P)·log(1/ε)) ≤ O(nκD²Hs·log(1/ε)) steps.
Example 5. Suppose f(x) = (1/2)‖x + 1‖² with P = [0,1]^n. Clearly n(P) = n. Unfortunately, we
can construct an initial x₁ that is a convex combination of only O(log n) vertices, after which AFW-2
runs O(n) away-steps consecutively. Hence our above analysis of the maximum length of an away
epoch seems tight. See the construction in Appendix A.
Tighter bounds. By refining the analysis for specific polytopes, we may improve upon the n(P) bound.
For example, it is not hard to show that n(P_k) = n(Q_k) = n. Consider instead the number of non-zeros
in the iterates x_t. A bad step (which must be an away-step) will either a) set an entry to 1, which forces
the corresponding entry of v⁻_t to be 1 in the future steps of the away epoch, and hence can happen
at most k times; or b) set at least one nonzero entry of x_t to 0, and it never switches a zero entry to
nonzero. But each FW-step may introduce at most k nonzeros. So the number of bad steps cannot
exceed 2k times the number of FW-steps, and the overall iteration complexity is at most O(kκD²Hs log(1/ε)).
We can now revisit Table 1 and observe the generality and efficiency of AFW-2. It is noteworthy that
on SLPs we have not yet been able to establish the same rate as AFW-1. We believe that having binary
vertices is a very special structure, which makes the analysis hard to generalize.
4 Application to Kernel Binary SVM
As a concrete example, we apply AFW to the dual objective of a binary SVM with bias:

    (SVM-Dual)   min_x f(x) := (1/2) x^⊤Qx − (1/C) 1^⊤x,   s.t. x ∈ [0,1]^n, y^⊤x = 0.   (20)
Here y = (y₁, . . . , y_n)^⊤ is the label vector with y_i ∈ {−1, 1}, and Q is the signed kernel matrix
with Q_ij = y_i y_j k(x_i, x_j). Since the feasible region is an SLP with diameter O(√n), we can use
both AFW-1 and PFW-1 to solve it within O(#SV · nκ log(1/ε)) iterations, where κ is the ratio between
the maximum and minimum eigenvalues of Q (assume Q is positive definite), and #SV stands for the
number of support vectors in the optimal solution.
Computational efficiency per iteration. The key technique for computational efficiency is to keep
updating the gradient ∇f(x) over the iterations, exploiting the fact that v⁺_t and v⁻_t may be sparse
and ∇f(x) = Qx − (1/C)1 is affine in x. In particular, when AFW takes a FW-step in line 5, we have

    Q d_t = Q d^FW_t = Q(v⁺_t − x_t) = −∇f(x_t) − (1/C)1 + Q v⁺_t.   (21)
Similar update formulas hold for the away-step d^A_t and the PFW-step d^PFW_t. So if v⁺_t (or v⁻_t) has k
non-zeros, all three updates can be performed in O(kn) time. Based on them, we can update
the gradient by ∇f(x_{t+1}) = ∇f(x_t) + η_t Q d_t. The FW-oracle and away-oracle cost O(n) time
given the gradient, and the line search has a closed-form solution. See more details in Appendix B.
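The update (21) is the entire trick, and it is short in code. The sketch below (ours; Q_col(i) is a hypothetical callable returning the i-th column of the signed kernel matrix, and only the FW-step case is shown) performs one step while evaluating just k kernel columns:

```python
import numpy as np

def fw_step_cached_gradient(x, grad, v_idx, v_val, Q_col, C, eta):
    """One FW-step x <- x + eta (v+ - x) on (20), refreshing the cached
    gradient via (21): Q d = -grad - (1/C) 1 + Q v+.
    v_idx/v_val: the k nonzeros of v+ (distinct indices), so the cost is
    k kernel-column evaluations (O(k n)) plus O(n) vector work."""
    n = x.size
    Qv = np.zeros(n)
    for i, val in zip(v_idx, v_val):
        Qv += val * Q_col(i)                     # k columns of Q
    Qd = -grad - (1.0 / C) * np.ones(n) + Qv     # eq. (21)
    x = (1.0 - eta) * x                          # x + eta (v+ - x)
    x[np.asarray(v_idx)] += eta * np.asarray(v_val)
    grad = grad + eta * Qd                       # grad f(x_{t+1})
    return x, grad
```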
Major drawback. This approach unfortunately provides no control over the sparsity of v⁺_t and v⁻_t.
As a result, each iteration may require evaluating the entire kernel matrix (O(n²) kernel evaluations),
leading to an overall computational cost of O(#SV · n³κ log(1/ε)). This can be prohibitive.
4.1 Reformulation by Relaxed Convex Hull
To ensure the sparsity of each update, we reformulate the SVM dual objective (20) using the
reduced convex hull (RC-Hull, [21]). Let P and N be the sets of positive and negative examples, resp.

    (RC-Margin)  min_{ω, ξ⁺∈R^{|P|}, ξ⁻∈R^{|N|}, β, γ}  (1/K)(1^⊤ξ⁺ + 1^⊤ξ⁻) + (1/2)‖ω‖² − β + γ,
                 s.t.  A^⊤ω − β1 + ξ⁺ ≥ 0,  −B^⊤ω + γ1 + ξ⁻ ≥ 0,  ξ⁺ ≥ 0,  ξ⁻ ≥ 0.   (22)

    (RC-Hull)    min_{u∈R^{|P|}, v∈R^{|N|}}  (1/2)‖Au − Bv‖²,   s.t.  u ∈ P_K,  v ∈ P_K.   (23)
Here A (or B) is the matrix whose i-th column is the (implicit) feature representation of the i-th positive
(or negative) example. RC-Margin resembles the primal SVM formulation, except that the bias term
is split into the two terms β and γ. RC-Hull is the dual problem of RC-Margin, and it has a very intuitive
geometric meaning. When K = 1, RC-Hull finds the distance between the convex hulls of P
and N. When the integer K is greater than 1, (1/K)Au ranges over a reduced convex hull of the positive
examples, and the objective finds the distance between the reduced convex hulls of P and N.
Since the feasible region of RC-Hull is a simplex, d_t in AFW and PFW has at most 2K and 4K
nonzeros, respectively, and it costs O(nK) time to update the gradient (see Appendix B.1). Given
K, Appendix B.2 shows how to recover the corresponding C in (20) and how to translate the optimal
solutions. Although solving RC-Hull requires knowing K (which is unknown a priori if we
are only given C), in practice it is equally justified to tune the value of K via model selection tools
in the first place, which approximately tunes the number of support vectors.
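The sparsity claim is easy to see in code: on P_K both oracles are top-K selections on the gradient. A sketch (ours; the away-oracle is our own specialization of (3) to P_K, and we assume K < n):

```python
import numpy as np

def fw_oracle_PK(grad, K):
    """FW-oracle on P_K = {x in [0,1]^n : 1'x = K}: the minimizing vertex
    places 1 on the K coordinates with the smallest gradient entries."""
    v = np.zeros_like(grad)
    v[np.argpartition(grad, K)[:K]] = 1.0
    return v

def away_oracle_PK(grad, x):
    """Away-oracle (3) on P_K: keep v_i = 1 where x_i = 1 and v_i = 0 where
    x_i = 0 (the tight constraints), then fill the remaining ones on the
    fractional coordinates with the largest gradient entries."""
    v = np.zeros_like(grad)
    v[x >= 1.0] = 1.0
    K_free = int(round(x.sum())) - int((x >= 1.0).sum())
    free = np.flatnonzero((x > 0.0) & (x < 1.0))
    if K_free > 0:
        v[free[np.argpartition(-grad[free], K_free - 1)[:K_free]]] = 1.0
    return v
```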
4.2 Discussion and Comparison of Rates of Convergence
Clearly, the feasible region of RC-Hull is an SLP, allowing us to apply AFW-1 and PFW-1 with
optimal linear convergence: O(#SV · κK log(1/ε)) ≤ O(κ(#SV)² log(1/ε)) iterations, because K = 1^⊤u ≤ #SV.
Overall, the computational cost is O(nκ(#SV)³ log(1/ε)).
[20] shows that sequential minimal optimization (SMO) [19, 22] costs O(n³κ log(1/ε)) computation. This
is greater than O(nκ(#SV)³ log(1/ε)) when #SV ≤ n^{2/3}. [23] requires O(κ²n‖Q‖_sp log(1/ε)) iterations,
each costing O(n). SVRG [24], SAGA [25], and SDCA [26] require the losses to be decomposable
and smooth, which does not hold for the hinge loss with a bias. SDCA can be extended to almost-smooth
losses such as the hinge loss, but even then the dimension dependency is unclear and it cannot handle the bias.
As a final remark, despite the superior rates of AFW-1 and PFW-1, their pre-defined step sizes make
them impractical. With line search, AFW-2 is much more efficient in practice, while at the same time
providing a theoretical guarantee of O(nκ(#SV)⁴ log(1/ε)) computational cost, just slightly worse (by a
factor of #SV). Such an advantage in both theory and practice from a single method is not available in PFW [18].
5 Experiments and Future Work
In this section we compare the empirical performance of AFW-2 against related methods. We first
illustrate its performance on kernel binary SVMs, then investigate a problem whose domain is not
an SLP, and finally demonstrate the scalability of AFW-2 on a large-scale dataset.
Binary SVM Our first comparison is on solving kernel binary SVMs with bias. Three datasets are
used: breast-cancer and a1a are obtained from the UCI repository [27], with n = 568 and 1,605
training examples respectively, and ijcnn1 is from [28], with a subset of 5,000 examples.
[Figure 1: Comparison of SMO and AFW-2 on three different datasets: (a) breast-cancer (K = 10), (b) a1a (K = 30), (c) ijcnn1 (K = 20). Each panel plots the primal objective against the number of kernel evaluations divided by the number of examples.]
As a competitor, we adopted the well-established Sequential Minimal Optimization (SMO) algorithm
[19]. Our implementation updates all cached errors corresponding to the examples whenever a variable is
updated. Using these cached errors, the algorithm heuristically picks the best subset
of variables to update at each iteration.
We first run AFW-2 on the RC-Hull objective in (23), with the value of K set to optimize the test
accuracy (K is shown in Figure 1). After obtaining the optimal solution, we compute the equivalent C
value based on the conversion rule in Appendix B.2, and then run SMO on the dual objective (20).
In Figure 1, we show the decay of the primal SVM objective (hence the fluctuation) as a function of
the number of kernel evaluations divided by n. This allows us to avoid the complications of CPU
frequency and kernel caching. Clearly, AFW-2 outperforms SMO on breast-cancer and ijcnn1, and
overtakes SMO on a1a after a few iterations.
PFW-1 and PFW-2 are also applicable to the RC-Hull formulation. Although the rate of PFW-1 is
better than that of AFW-2, it is much slower in practice. Empirically we observed that PFW-2 behaves
similarly to our AFW-2, but unfortunately PFW-2 has no theoretical guarantee.
General Polytope Our next comparison uses Q_k as the domain. Since it is not an SLP, neither PFW-1
nor PFW-2 provides a bound. Here we aim to show that AFW-2 not only provides a good rate of
convergence, but is also comparable to (or better than) PFW-2 in terms of practical efficiency. Our
objective is least squares (akin to the lasso):

    min_x f(x) = ‖Ax − b‖²,   0 ≤ x ≤ 1,   1^⊤x ≤ 375.
[Figure 2: Lasso. Gap versus the number of steps for AFW-2 and PFW-2.]
[Figure 3: Full ijcnn1 (K = 100). Primal objective versus the number of kernel evaluations divided by the number of examples, for AFW-2 and SMO.]
Here A ∈ R^{100×1000}, and both A and b were generated randomly. Both the FW-oracle and the
away-oracle are simply based on sorting the gradient. As shown in Figure 2, AFW-2 is indeed slightly
faster than PFW-2.
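For the record, the sort-based FW-oracle used here takes one line of reasoning: a minimizing vertex of Q_k sets v_i = 1 exactly on the (at most k) coordinates whose gradient entries are negative and smallest. A sketch (ours; the away-oracle is analogous, maximizing instead over the face of tight constraints):

```python
import numpy as np

def fw_oracle_Qk(grad, k):
    """FW-oracle on Q_k = {x in [0,1]^n : 1'x <= k}: pick at most k of the
    most negative gradient coordinates; taking fewer than k ones is allowed
    because the sum constraint is an inequality."""
    v = np.zeros_like(grad)
    best = np.argsort(grad)[:k]          # k smallest gradient entries
    v[best[grad[best] < 0.0]] = 1.0      # keep only the negative ones
    return v
```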
Scalability To demonstrate the scalability of AFW-2, we plot its convergence curve along with
that of SMO on the full ijcnn1 dataset with 49,990 examples. In Figure 3, AFW-2 starts with a higher primal
objective, but after a while it outperforms SMO near the optimum. In this problem, kernel
evaluation is the major computational bottleneck, hence it is used as the horizontal axis; this also
avoids the complications of CPU speed that arise when wall-clock time is used.
6 Future work
We will extend the decomposition-invariant method to gauge-regularized problems [29–31] and
derive comparable linear convergence rates. Moreover, although the pyramidal width is hard to evaluate,
it will be valuable to compare it with Hs, even in terms of upper/lower bounds.
Acknowledgements. We thank Dan Garber for very helpful discussions and clarifications on [18].
Mohammad Ali Bashiri is supported in part by NSF grant RI-1526379.
References
[1] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics
Quarterly, 3(1-2):95?110, 1956.
[2] Z. Harchaoui, M. Douze, M. Paulin, M. Dudik, and J. Malick. Large-scale image classification
with trace-norm regularization. In Proc. IEEE Conf. Computer Vision and Pattern Recognition.
2012.
[3] E. S. Levitin and B. T. Polyak. Constrained minimization methods. USSR Computational
Mathematics and Mathematical Physics, 6(5):787?823, 1966.
[4] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings
of International Conference on Machine Learning. 2013.
[5] A. Beck and M. Teboulle. A conditional gradient method with linear rate of convergence for
solving convex linear systems. Mathematical Methods of Operations Research, 59(2):235?247,
2004.
[6] J. Gu?eLat and P. Marcotte. Some comments on Wolfe?s ?away step?. Mathematical Programming, 35(1):110?119, 1986.
[7] P. Wolfe. Convergence theory in nonlinear programming. In Integer and Nonlinear Programming. North-Holland, 1970.
[8] S. D. Ahipasaoglu, P. Sun, and M. J. Todd. Linear convergence of a modified Frank-Wolfe algorithm for computing minimum-volume enclosing ellipsoids. Optimization Methods Software,
23(1):5?19, 2008.
?
E. Frandi, C. Sartori, and H. Allende. A novel Frank-Wolfe algorithm. analysis
[9] R. Nanculef,
and applications to large-scale svm training. Information Sciences, 285(C):66?99, 2014.
[10] P. Kumar and E. A. Yildirim. A linearly convergent linear-time first-order algorithm for support
vector classification with a core set result. INFORMS J. on Computing, 23(3):377?391, 2011.
[11] S. M. Robinson. Generalized equations and their solutions, part II: Applications to nonlinear
programming. Springer Berlin Heidelberg, 1982.
[12] D. Garber and E. Hazan. A linearly convergent variant of the conditional gradient algorithm
under strong convexity, with applications to online and stochastic optimization. SIAM Journal
on Optimization, 26(3):1493?1528, 2016.
[13] S. Lacoste-Julien and M. Jaggi. On the global linear convergence of Frank-Wolfe optimization
variants. In Neural Information Processing Systems. 2015.
[14] S. Lacoste-Julien and M. Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe
algorithms. In NIPS 2013 Workshop on Greedy Algorithms, Frank-Wolfe and Friends. 2013.
[15] A. Beck and S. Shtern. Linearly convergent away-step conditional gradient for non-strongly
convex functions. Mathematical Programming, pp. 1?27, 2016.
[16] G. Lan. The complexity of large-scale convex programming under a linear optimization oracle.
Technical report, University of Florida, 2014.
[17] D. Garber. Faster projection-free convex optimization over the spectrahedron. In Neural
Information Processing Systems. 2016.
[18] D. Garber and O. Meshi. Linear-memory and decomposition-invariant linearly convergent
conditional gradient algorithm for structured polytopes. In Neural Information Processing
Systems. 2016.
[19] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector
machines. Tech. Rep. MSR-TR-98-14, Microsoft Research, 1998.
[20] N. List and H. U. Simon. SVM-optimization and steepest-descent line search. In S. Dasgupta
and A. Klivans, eds., Proc. Annual Conf. Computational Learning Theory. Springer, 2009.
[21] K. P. Bennett and E. J. Bredensteiner. Duality and geometry in SVM classifiers. In Proceedings
of International Conference on Machine Learning. 2000.
[22] S. S. Keerthi, O. Chapelle, and D. DeCoste. Building support vector machines with reduced
classifier complexity. Journal of Machine Learning Research, 7:1493-1515, 2006.
[23] Y. You, X. Lian, J. Liu, H.-F. Yu, I. S. Dhillon, J. Demmel, and C.-J. Hsieh. Asynchronous
parallel greedy coordinate descent. In Neural Information Processing Systems. 2016.
[24] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance
reduction. In Neural Information Processing Systems. 2013.
[25] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with
support for non-strongly convex composite objectives. In Neural Information Processing
Systems. 2014.
[26] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss
minimization. Journal of Machine Learning Research, 14:567-599, 2013.
[27] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[28] D. Prokhorov. IJCNN 2001 neural network competition. Slide presentation in IJCNN, 1:97, 2001.
[29] M. Jaggi and M. Sulovsky. A simple algorithm for nuclear norm regularized problems. In
Proceedings of International Conference on Machine Learning. 2010.
[30] X. Zhang, Y. Yu, and D. Schuurmans. Accelerated training for matrix-norm regularization: A
boosting approach. In Neural Information Processing Systems. 2012.
[31] Z. Harchaoui, A. Juditsky, and A. Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Mathematical Programming, 152:75-112, 2015.
VAIN: Attentional Multi-agent Predictive Modeling
Yedid Hoshen
Facebook AI Research, NYC
[email protected]
Abstract
Multi-agent predictive modeling is an essential step for understanding physical,
social and team-play systems. Recently, Interaction Networks (INs) were proposed
for the task of modeling multi-agent physical systems. One of the drawbacks of
INs is scaling with the number of interactions in the system (typically quadratic
or higher order in the number of agents). In this paper we introduce VAIN,
a novel attentional architecture for multi-agent predictive modeling that scales
linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging
multi-agent prediction domains: chess and soccer, and outperforms competing
multi-agent approaches.
1 Introduction
Modeling multi-agent interactions is essential for understanding the world. The physical world
is governed by (relatively) well-understood multi-agent interactions including fundamental forces
(e.g. gravitational attraction, electrostatic interactions) as well as more macroscopic phenomena
(electrical conductors and insulators, astrophysics). The social world is also governed by multi-agent
interactions (e.g. psychology and economics) which are often imperfectly understood. Games such
as Chess or Go have simple and well defined rules but move dynamics are governed by very complex
policies. Modeling and inference of multi-agent interaction from observational data is therefore an
important step towards machine intelligence.
Deep Neural Networks (DNNs) have had much success in machine perception e.g. Computer Vision
[1, 2, 3], Natural Language Processing [4] and Speech Recognition [5, 6]. These problems usually
have temporal and/or spatial structure, which makes them amenable to particular neural architectures
- Convolutional and Recurrent Neural Networks (CNN [7] and RNN [8]). Multi-agent interactions
are different from machine perception in several ways:
? The data is no longer sampled on a spatial or temporal grid.
? The number of agents changes frequently.
? Systems are quite heterogeneous, there is not a canonical large network that can be used for
finetuning.
? Multi-agent systems have an obvious factorization (into point agents), whereas signals such
as images and speech do not.
To model simple interactions in a physics simulation context, Interaction Networks (INs) were
proposed by Battaglia et al. [9]. Interaction networks model each interaction in the physical
interaction graph (e.g. force between every two gravitating bodies) by a neural network. By the
additive sum of the vector outputs of all the interactions, a global interaction vector is obtained.
The global interaction alongside object features are then used to predict the future velocity of the
object. It was shown that Interaction Networks can be trained for different numbers of physical agents
and generate accurate results for simple physical scenarios in which the nature of the interaction is
additive and binary (i.e. pairwise interaction between two agents) and while the number of agents is
small.
Although Interaction Networks are suitable for the physical domain for which they were introduced,
they have significant drawbacks that prevent them from being efficiently extensible to general multi-agent interaction scenarios. The network complexity is O(N^d), where N is the number of objects
and d is the typical interaction clique size. Fundamental physics interactions simulated by the
method have d = 2, resulting in a quadratic dependence and higher order interactions become
completely unmanageable. In Social LSTM [10], this was remedied by pooling a local neighborhood
of interactions. The solution however cannot work for scenarios with long-range interactions. Another
solution offered by Battaglia et al. [9] is to add several fully connected layers modeling the high-order
interactions. This approach struggles when the objective is to select one of the agents (e.g. which
agent will move), as it results in a distributed representation and loses the structure of the problem.
In this work we present VAIN (Vertex Attention Interaction Network), a novel multi-agent attentional
neural network for predictive modeling. VAIN's attention mechanism helps with modeling the locality
of interactions and improves performance by determining which agents will share information. VAIN
can be said to be a CommNet [11] with a novel attention mechanism or a factorized Interaction
Network [9]. This will be made more concrete in Sec. 2. We show that VAIN can model high-order
interactions with linear complexity in the number of vertices while preserving the structure of the
problem; this has lower complexity than IN in cases where there are many fewer vertices than edges
(in many cases linear vs quadratic in the number of agents).
For evaluation we introduce two non-physical tasks which more closely resemble real-world and
game-playing multi-agent predictive modeling, as well as a physical Bouncing Balls task. Our
non-physical tasks are taken from Chess and Soccer and contain different types of interactions and
different data regimes. The interaction graph on these tasks is not known a priori, as is typical in
nature.
An informal analysis of our architecture is presented in Sec. 2. Our method is presented in Sec. 3. Descriptions of our experimental evaluation scenarios and our results are provided in Sec. 4. Conclusions
and future work are presented in Sec. 5.
Related Work
This work is primarily concerned with learning multi-agent interactions with graph structures. The
seminal works in graph neural networks were presented by Scarselli et al. [12, 13] and Li et al. [14].
Another notable iterative graph-like neural algorithm is the Neural-GPU [15]. Notable works in
graph NNs includes Spectral Networks [16] and work by Duvenaud et al. [17] for fingerprinting of
chemical molecules.
Two related approaches that learn multi-agent interactions on a graph structure are: Interaction
Networks [9] which learn a physical simulation of objects that exhibit binary relations and Communication Networks (CommNets) [11], presented for learning optimal communications between
agents. The differences between our approach, VAIN, and the previous approaches, INs and CommNets,
are analyzed in detail in Sec. 2.
Another recent approach is PointNet [18] where every point in a point cloud is embedded by a deep
neural net, and all embeddings are pooled globally. The resulting descriptor is used for classification
and segmentation. Although a related approach, the paper is focused on 3D point clouds rather
than multi-agent systems. A different approach is presented by Social LSTM [10] which learns
social interaction by jointly training multiple interacting LSTMs. The complexity of that approach is
quadratic in the number of agents requiring the use of local pooling that only deals with short range
interactions to limit the number of interacting bodies.
The attentional mechanism in VAIN has some connection to Memory Networks [19, 20] and Neural
Turing Machines [21]. Other works dealing with multi-agent reinforcement learning include [22]
and [23].
There has been much work on board game bots (although the approach of modeling board games as
interactions in a neural network multi-agent system is new). Approaches include [24, 25] for Chess,
[26, 27, 28] for Backgammon, and [29] for Go.
Concurrent work: We found on Arxiv two concurrent submissions which are relevant to this work.
Santoro et al. [30] discovered that an architecture nearly identical to Interaction Nets achieves
excellent performance on the CLEVR dataset [31]. We leave a comparison on CLEVR for future
work. Vaswani et al. [32] use an architecture that bears similarity to VAIN for achieving state-of-the-art performance for machine translation. The differences between our work and Vaswani et al.'s
concurrent work are substantial in application and precise details.
2 Factorizing Multi-Agent Interactions
In this section we give an informal analysis of the multi-agent interaction architectures presented by
Interaction Networks [9], CommNets [11] and VAIN.
Interaction Networks model each interaction by a neural network. For simplicity of analysis, let us
restrict the interactions to be of 2nd order. Let φ_int(x_i, x_j) be the interaction between agents A_i and
A_j, and φ(x_i) be the non-interacting features of agent A_i. The output is given by a function θ() of
the sum of all of the interactions of A_i, Σ_j φ_int(x_i, x_j), and of the non-interacting features φ(x_i):

o_i = θ( Σ_{j≠i} φ_int(x_i, x_j), φ(x_i) )    (1)

A single-step evaluation of the output for the entire system requires O(N^2) evaluations of φ_int().
An alternative architecture is presented by CommNets, where interactions are not modeled explicitly.
Instead, an interaction vector φ_com(x_i) is computed for each agent. The output is computed by:

o_i = θ( Σ_{j≠i} φ_com(x_j), φ(x_i) )    (2)

A single-step evaluation of the CommNet architecture requires O(N) evaluations of φ_com(). A
significant drawback of this representation is that it does not model the interactions explicitly,
putting the whole burden of modeling on θ(). This can often result in weaker performance (as shown in our
experiments).
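As a toy illustration of the complexity contrast between Eqs. (1) and (2), the sketch below builds small stand-in networks for φ_int, φ_com, φ and θ; all names and sizes are assumptions made purely for illustration and are not the architectures used in [9] or [11].

import torch
import torch.nn as nn

N, d, h = 5, 8, 16
phi_int = nn.Linear(2 * d, h)    # evaluated once per ordered pair: O(N^2)
phi_com = nn.Linear(d, h)        # evaluated once per agent: O(N)
phi, theta = nn.Linear(d, h), nn.Linear(2 * h, 1)
x = torch.randn(N, d)

# Interaction Network, Eq. (1): sum phi_int(x_i, x_j) over all j != i.
pairs = torch.cat([x.unsqueeze(1).expand(N, N, d),
                   x.unsqueeze(0).expand(N, N, d)], dim=-1)
mask = 1.0 - torch.eye(N).unsqueeze(-1)              # drops the j = i terms
o_in = theta(torch.cat([(phi_int(pairs) * mask).sum(dim=1), phi(x)], dim=-1))

# CommNet, Eq. (2): pool phi_com(x_j) once; subtract own term to exclude j = i.
pooled = phi_com(x).sum(dim=0, keepdim=True) - phi_com(x)
o_cn = theta(torch.cat([pooled, phi(x)], dim=-1))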
VAIN's architecture preserves the complexity advantages of CommNet while addressing its limitations in comparison to IN. Instead of requiring a full network evaluation for every interaction pair
φ_int(x_i, x_j), it learns a communication vector φ^c_vain(x_i) for each agent and additionally an attention
vector a_i = φ^a_vain(x_i). The strength of interaction between agents is modulated by the kernel function
e^{-||a_i - a_j||^2}. The interaction is approximated by:

φ_int(x_i, x_j) = e^{-||a_i - a_j||^2} φ_vain(x_j)    (3)
The output is given by:

o_i = θ( Σ_{j≠i} e^{-||a_i - a_j||^2} φ_vain(x_j), φ(x_i) )    (4)
In cases where the kernel function is a good approximation for the relative strength of interaction (in
some high-dimensional linear space), VAIN presents an efficient linear approximation for IN which
preserves CommNet's complexity in θ().
Although physical interactions are often additive, many other interesting cases (games, social
systems, team play) are not. In such cases the average, rather than the sum, of the interactions
should be used (in [9] only physical scenarios were presented and therefore the sum was always used,
whereas in [11] only non-physical cases were considered and therefore only averaging was used).
In non-additive cases VAIN uses a softmax:

K_{i,j} = e^{-||a_i - a_j||^2} / Σ_j e^{-||a_i - a_j||^2}    (5)
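To make the attention computation concrete, the following is a minimal PyTorch sketch (not the authors' released code) of the kernel attention in Eqs. (3)-(5); the tensor names, shapes, and the toy usage line are illustrative assumptions.

import torch

def vain_attention_pool(a, e_c):
    # a: (N, d_a) attention vectors; e_c: (N, d_c) communication vectors.
    logits = -torch.cdist(a, a) ** 2                 # -||a_i - a_j||^2
    eye = torch.eye(a.size(0), dtype=torch.bool)
    logits = logits.masked_fill(eye, float('-inf'))  # zero weight on self
    K = torch.softmax(logits, dim=1)                 # rows are K_{i,j} of Eq. (5)
    return K @ e_c                                   # attention-pooled features

# Example: 50 agents, 10-d attention vectors, 128-d communication vectors.
pooled = vain_attention_pool(torch.randn(50, 10), torch.randn(50, 128))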
3 Model Architecture
In this section we model the interaction between N agents denoted by A_1...A_N. The output can
either be a prediction for every agent or a system-level prediction (e.g. predict which agent will act
next).
Figure 1: A schematic of a single-hop VAIN: i) The agent features F_i are embedded by singleton
encoder E^s() to yield encoding e^s_i, and by communications encoder E^c() to yield vector e^c_i and
attention vector a_i. ii) For each agent an attention-weighted sum of all embeddings e^c_j is computed,
P_i = Σ_j w_{i,j} · e^c_j. The attention weights w_{i,j} are computed by a Softmax over -||a_i - a_j||^2;
the diagonal w_{i,i} is set to zero to exclude self-interactions. iii) The singleton codes e^s_i are
concatenated with the pooled feature P_i to yield intermediate feature C_i. iv) The feature is passed
through decoding network D() to yield per-agent vector o_i. For regression, o_i is the final output of
the network; for classification, o_i is scalar and is passed through a Softmax.
Although it is possible to use multiple hops, our presentation here uses only a single hop (multiple
hops did not help in our experiments).
Features are extracted for every agent A_i and we denote the features by F_i. The features are guided
by basic domain knowledge (such as agent type or position).
We use two agent encoding functions: i) a singleton encoder for single-agent features E^s(), and ii) a
communication encoder for interaction with other agents E^c(). The singleton encoding function E^s()
is applied to all agent features F_i to yield singleton encoding e^s_i:

E^s(F_i) = e^s_i    (6)

We define the communication encoding function E^c(). The encoding function is applied to all
agent features F_i to yield both encoding e^c_i and attention vector a_i. The attention vector is used for
addressing the agents with whom information exchange is sought. E^c() is implemented by fully
connected neural networks (from now on, FCNs).

E^c(F_i) = (e^c_i, a_i)    (7)
For each agent we compute the pooled feature P_i: the interaction vectors from other agents, weighted
by attention. We exclude self-interactions by setting the self-interaction weight to 0:

P_i = Σ_j e^c_j · Softmax_j(-||a_i - a_j||^2) · (1 - δ_{j=i})    (8)

This is in contrast to the average pooling mechanism used in CommNets, and we show that it yields
better results. The motivation is to average only information from relevant agents (e.g. nearby or
particularly influential agents). The weights w_{i,j} = Softmax_j(-||a_i - a_j||^2) give a measure of
the interaction between agents. Although naively this operation scales quadratically in the number
of agents, it involves only a multiplication by the feature dimension rather than a full E() evaluation,
and is therefore significantly cheaper than the (linear number of) E() calculations carried out by the
algorithm. In case the number of agents is very large (>1000), the cost can still be mitigated: the
Softmax operation often yields a sparse matrix, and in such cases the interaction can be modeled by
the K nearest neighbors (measured by attention). The calculation is far cheaper than evaluating E^c()
O(N^2) times as in IN. In cases where even this cheap operation is too expensive, we recommend
using CommNets as a default, as they truly have O(N) complexity.
The pooled feature P_i is concatenated with the singleton encoding e^s_i to form the intermediate
feature C_i:

C_i = (P_i, e^s_i)    (9)

The features C_i are passed through the decoding function D(), which is also implemented by FCNs.
The result is denoted by o_i:

o_i = D(C_i)    (10)

For regression problems, o_i is the per-agent output of VAIN. For classification problems, D()
is designed to give scalar outputs. The result is passed through a softmax layer yielding agent
probabilities:

Prob(i) = Softmax(o_i)    (11)
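Putting Eqs. (6)-(11) together, here is a self-contained single-hop VAIN sketch in PyTorch. The two 256-unit hidden layers, 128-d encodings and 10-d attention vectors follow the implementation details reported in Sec. 4.4; the class and helper names, and the usage comment, are our own illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn

def fcn(d_in, d_out):
    # Two-hidden-layer fully connected network with ReLU activations.
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, d_out))

class VAIN(nn.Module):
    def __init__(self, d_feat, d_enc=128, d_att=10, d_out=1):
        super().__init__()
        self.E_s = fcn(d_feat, d_enc)           # singleton encoder, Eq. (6)
        self.E_c = fcn(d_feat, d_enc + d_att)   # communication encoder, Eq. (7)
        self.D = fcn(2 * d_enc, d_out)          # decoder, Eq. (10)
        self.d_att = d_att

    def forward(self, F):                       # F: (N, d_feat) agent features
        e_s = self.E_s(F)                       # e^s_i
        enc = self.E_c(F)
        e_c, a = enc[:, :-self.d_att], enc[:, -self.d_att:]
        logits = -torch.cdist(a, a) ** 2        # -||a_i - a_j||^2
        eye = torch.eye(F.size(0), dtype=torch.bool)
        logits = logits.masked_fill(eye, float('-inf'))   # w_{i,i} = 0, Eq. (8)
        P = torch.softmax(logits, dim=1) @ e_c  # pooled features P_i
        C = torch.cat([P, e_s], dim=1)          # Eq. (9)
        return self.D(C)                        # per-agent outputs o_i

# For selection tasks such as MPP, Eq. (11) is a softmax over the scalar o_i:
# probs = torch.softmax(VAIN(d_feat=28)(features).squeeze(1), dim=0)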
Several advantages of VAIN over Interaction Networks [9] are apparent:
Representational Power: VAIN does not assume that the interaction graph is pre-specified (in fact the
attention weights w_{i,j} learn the graph). Pre-specifying the graph structure is advantageous when it is
clearly known e.g. spring-systems where locality makes a significant difference. In many multi-agent
scenarios the graph structure is not known a priori. Multiple hops can give VAIN the potential to
model higher-order interactions than IN, although this was not found to be advantageous in our
experiments.
Complexity: As explained in Sec. 2, VAIN features better complexity than INs. The complexity
advantage increases with the order of interaction.
4 Evaluation
We presented VAIN, an efficient attentional model for predictive modeling of multi-agent interactions.
In this section we show that our model achieves better results than competing methods while having
a lower computational complexity.
We perform experiments on tasks from different multi-agent domains to highlight the utility and
generality of VAIN: chess move, soccer player prediction and physical simulation.
4.1 Chess Piece Prediction
Chess is a board game involving complex multi-agent interactions. Chess is difficult from a multi-agent perspective due to having 12 different types of agents and non-local high-order interactions. In
this experiment we do not attempt to create an optimal chess player. Rather, we are given a board
position from a professional game. Our task is to identify the piece that will move next (MPP).
There are 32 possible pieces, each encoded by one-hot encodings of piece type, x position, and y position.
Missing pieces are encoded with all zeros. The output is the id of the piece that will move next.
For training and evaluation of this task we downloaded 10k games from the FICS Games Dataset, an
on-line repository of chess games. All the games used are standard games between professionally
ranked players. 9k randomly sampled games were used for training, and the remaining 1k games
for evaluation. Moves later in the game than move 100 (i.e. 50 Black and 50 White moves) were dropped
from the dataset so as not to bias it towards particularly long games. The total number of moves is
around 600k.
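As an illustration of the per-piece encoding just described, a hypothetical feature builder is sketched below; the exact one-hot widths (12 piece types, 8 files, 8 ranks) are assumptions, since the text only states that piece type and x/y position are one-hot encoded and that missing pieces map to all zeros.

import numpy as np

def piece_features(piece_type, x, y, on_board=True):
    # piece_type in 0..11 ({pawn..king} x {white, black}); x, y in 0..7.
    f = np.zeros(12 + 8 + 8)
    if on_board:                  # a captured piece stays all-zero
        f[piece_type] = 1.0
        f[12 + x] = 1.0           # file
        f[20 + y] = 1.0           # rank
    return f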
We use the following methods for evaluation: Rand: random piece selection. FC: a standard
FCN with three hidden layers (64 hidden nodes each); this method requires indexing to be learned.
SMax: each piece is encoded by a neural network into a scalar "vote"; the "votes" from all input
pieces are fed to a Softmax classifier predicting the output label. This approach does not require
learning to index, but cannot model interactions. 1hop-FC: each piece is encoded as in SMax but
to a vector rather than a scalar; a deep (3-layer) classifier predicts the MPP from the concatenation of
the vectors. CommNet: a standard CommNet (no attention) [11]; the protocol for CommNet is
the same as for VAIN. IN: an Interaction Network followed by a Softmax (as for VAIN); inference for
this IN required around 8 times more computation than VAIN and CommNet. ours: VAIN.
Table 1: Accuracy (%) for the Next Moving Piece (MPP) experiments.

Rand   FC     SMax   1hop-FC   CommNet   IN     ours
4.5    21.6   13.3   18.6      27.2      28.3   30.1
The results for next moving chess piece prediction can be seen in Table 1. Our method clearly
outperforms the competing baselines, illustrating that VAIN is effective at selection-type problems,
i.e. selecting 1-of-N agents according to some criterion (in this case, likelihood to move). The
non-interactive method SMax performs much better than Rand (+9%) due to its use of move
statistics. Interactive methods (FC, 1hop-FC, CommNet, IN and VAIN) naturally perform
better, as the interactions between pieces are important for deciding the next mover. It is interesting
that the simple FC method performs better than 1hop-FC (+3%); we think this is because the
classifier in 1hop-FC finds it hard to recover the indexes after the average pooling layer. This shows
that one-hop networks followed by fully connected classifiers (such as the original formulation of
Interaction Networks) struggle at selection-type problems. Our method VAIN performs much better
than 1hop-FC (+11.5%) due to the per-vertex outputs o_i and the coupling between agents. VAIN also
performs significantly better than FC (+8.5%) as it does not have to learn indexing. It outperforms
vanilla CommNet by 2.9%, showing the advantage of our attention mechanism. It also outperforms
INs followed by a per-agent Softmax (similar to the formulation for VAIN) by 1.8%, even though
the IN performs around 8 times more computation than VAIN.
4.2 Soccer Players
Team-player interaction is a promising application area for end-to-end multi-agent modeling as
the rules of sports interaction are quite complex and not easily formulated by hand-coded rules.
An additional advantage is that predictive modeling can be self-supervised and no labeled data is
necessary. In team-play situations many agents may be present and interacting at the same time
making the complexity of the method critical for its application.
In order to evaluate the performance of VAIN on team-play interactions, we use the Soccer Video and
Player Position Dataset (SVPP) [33]. The SVPP dataset contains the parameters of soccer players
tracked during two home matches played by Tromsø IL, a Norwegian soccer team. The sensors
were positioned on each home team player, and recorded the player's location, heading direction
and movement velocity (as well as other parameters that we did not use in this work). The data was
re-sampled by [33] to occur at regular 20 Hz intervals. We further subsampled the data to 2 Hz. We
only use sensor data rather than raw-pixels. End-to-end inference from raw-pixel data is left to future
work.
The task that we use for evaluation is predicting from the current state of all players, the position of
each player for each time-step during the next 4 seconds (i.e. at T + 0.5, T + 1.0 ... T + 4.0). Note
that for this task we use just a single frame rather than several previous frames, and therefore do not
use RNN encoders.
We use the following methods for evaluation: Static: trivial prediction of zero motion. PALV: linearly
extrapolating the agent displacement by the current linear velocity. PALAF: a linear regressor
predicting the agent's velocity using all features, including the velocity, but also the agent's heading
direction and, most significantly, the agent's current field position. PAD: a predictive model using all
the above features but with three fully-connected layers (with 256, 256 and 16 nodes). CommNet:
a standard CommNet (no attention) [11]; the protocol for CommNet is the same as for VAIN. IN: an
Interaction Network [9], requiring O(N^2) network evaluations. ours: VAIN.
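A minimal sketch of the PALV baseline just described, with hypothetical names and the eight half-second prediction horizons used in this task:

import numpy as np

def palv_predict(pos, vel, horizons=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)):
    # pos, vel: (num_players, 2) arrays in meters and meters/second.
    # Constant-velocity extrapolation of each player's position.
    return np.stack([pos + t * vel for t in horizons])  # (8, num_players, 2)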
We excluded the second half of the Anzhi match due to large sensor errors for some of the players
(occasional 60m position changes in 1-2 seconds).
A few visualizations of the Soccer scenario can be seen in Fig. 4. The positions of the players are
indicated by green circles, apart from a target player (chosen by us), that is indicated by a blue circle.
The brightness of each circle is chosen to be proportional to the strength of attention between each
player and the target player. Arrows are proportional to player velocity. We can see in this scenario
that the attention to nearest players (attackers to attackers, midfielder to midfielders) is strongest, but
attention is given to all field players. The goal keeper normally receives no attention (due to being
Figure 2: a) A soccer match used for the Soccer task. b) A chess position illustrating the high-order
nature of the interactions in next move prediction. Note that in both cases, VAIN uses agent positional
and sensor data rather than raw-pixels.
Table 2: Soccer prediction errors (meters).

Dataset  Time-step  Static  PALV  PALAF  PAD   IN    CommNet  ours
1103a    0.5        0.54    0.14  0.14   0.14  0.16  0.15     0.14
         2.0        1.99    1.16  1.14   1.13  1.09  1.10     1.09
         4.0        3.58    2.67  2.62   2.58  2.47  2.48     2.47
1103b    0.5        0.49    0.13  0.13   0.13  0.14  0.13     0.13
         2.0        1.81    1.06  1.06   1.04  1.02  1.02     1.02
         4.0        3.27    2.42  2.41   2.38  2.30  2.31     2.30
1107a    0.5        0.61    0.17  0.17   0.17  0.17  0.17     0.17
         2.0        2.23    1.36  1.34   1.32  1.26  1.26     1.25
         4.0        3.95    3.10  3.03   2.99  2.82  2.81     2.79
Mean                1.84    1.11  1.10   1.08  1.04  1.04     1.03
far away, and in normal situations not affecting play). This is an example of mean-field rather than
sparse attention.
We evaluated our methods on the SVPP dataset. The prediction errors in Table 2 are broken down for
different time-steps and for different train / test datasets splits. It can be seen that the non-interactive
baselines generally fare poorly on this task as the general configuration of agents is informative for
the motion of agents beyond a simple extrapolation of motion. Examples of patterns that can be
picked up include: running back to the goal to help the defenders, running up to the other team?s goal
area to join an attack. A linear model including all the features performs better than a velocity only
model (as position is very informative). A non-linear per-player model with all features improves
on the linear models. The interaction network, CommNet and VAIN significantly outperform the
non-interactive methods. VAIN outperformed CommNet and IN, achieving this with only 4% of the
number of encoder evaluations performed by IN. This validates our premise that VAIN?s architecture
can model object interactions without modeling each interaction explicitly.
4.3 Bouncing Balls
Following Battaglia et al. [9], we present a simple physics-based experiment. In this scenario, balls
are bouncing inside a 2D square container of size L. There are N identical balls (we use N = 50)
which are of constant size and are perfectly elastic. The balls are initialized at random positions and
with initial velocities sampled at random from [-v_0, v_0] (we use v_0 = 3 m/s). The balls collide
with other balls and with the walls, where the collisions are governed by the laws of elastic collisions.
The task which we evaluate is the prediction of the displacement and change in velocity of each ball in
the next time step. We evaluate the prediction accuracy of our method VAIN as well as Interaction
Networks [9] and CommNets [11]. We found it useful to replace VAIN's attention mechanism with an
unnormalized attention function due to the additive nature of physical forces:

p_{i,j} = e^{-||a_i - a_j||^2} · (1 - δ_{j=i})    (12)
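A minimal sketch of this unnormalized variant, under the same assumptions and naming as the earlier attention sketch: the softmax is simply dropped and the raw Gaussian kernel weights are used, which suits additive physical forces.

import torch

def unnormalized_pool(a, e_c):
    w = torch.exp(-torch.cdist(a, a) ** 2)        # e^{-||a_i - a_j||^2}
    eye = torch.eye(a.size(0), dtype=torch.bool)
    w = w.masked_fill(eye, 0.0)                   # exclude self-interaction
    return w @ e_c                                # additive (unnormalized) pooling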
In Fig. 4 we can observe the attention maps for two different balls in the Bouncing Balls scenario.
The position of the ball is represented by a circle. The velocity of each ball is indicated by a line
Figure 3: Accuracy differences between VAIN and IN for different computation budgets: VAIN
outperforms IN by spending its computation budget on a few larger networks (one for each agent)
rather than many small networks (one for every pair of agents). This is even more significant for
small computation budgets.
Table 3: RMS accuracy of Bouncing Balls next-step prediction.

       VEL0    VEL-CONST   COMMNET   IN      VAIN
RMS    0.561   0.547       0.510     0.139   0.135
extending from the center of the circle, the length of the line is proportional to the speed of the ball.
For each figure we choose a target ball A_i and paint it blue. The attention strength of each agent
A_j with respect to A_i is indicated by the shade of the circle. The brighter the circle, the stronger the
attention. In the first scenario we observe that the two balls near the target receive attention whereas
other balls are suppressed. This shows that the system exploits the sparsity due to locality that is
inherent to this multi-agent system. In the second scenario we observe that the ball on a collision
course with the target receives much stronger attention, relative to a ball that is much closer to the
target but is not likely to collide with it. This indicates VAIN learns important attention features
beyond the simple positional hand-crafted features typically used.
The results of our bouncing balls experiments can be seen in Table 3. We see that in this physical scenario VAIN significantly outperformed CommNets, and it achieves better performance than Interaction
Networks for similar computation budgets. In Fig. 3 we see that the difference increases for small
computation budgets. The attention mechanism is shown to be critical to the success of the method.
4.4 Analysis and Limitations
Our experiments showed that VAIN achieves better performance than other architectures with similar
complexity and equivalent performance to higher complexity architectures, mainly due to its attention
mechanism. There are two ways in which the attention mechanism implicitly encodes the interactions
of the system: i) Sparse: if only a few agents significantly interact with agent A_o, the attention
mechanism will highlight these agents (finding K spatial nearest neighbors is a special case of
such attention). In this case CommNets will fail. ii) Mean-field: if a space can be found where the
important interactions act in an additive way (e.g. in the soccer team dynamics scenario), the attention
mechanism would find the correct weights for the mean field. In this case CommNets would work,
but VAIN can still improve on them.
VAIN is less well-suited for cases where both conditions hold: the interactions are not sparse, so
that the K most important interactions will not give a good representation; and the interactions are
strong and highly non-linear, so that a mean-field approximation is non-trivial. One such scenario is
the M-body gravitation problem. Interaction Networks are particularly well suited for this scenario,
and VAIN's factorization will not yield an advantage.
Implementation
[Figure 4 panels: Bouncing Balls (a), Bouncing Balls (b), Soccer (a), Soccer (b).]
Figure 4: A visualization of attention in the Bouncing Balls and Soccer scenarios. The target ball
is blue, and others are green. The brightness of each ball indicates the strength of attention with
respect to the (blue) target ball. The arrows indicate direction of motion. Bouncing Balls: Left
image: The ball nearer to target ball receives stronger attention. Right image: The ball on collision
course with the target ball receives much stronger attention than the nearest neighbor of the target
ball. Soccer: This is an example of mean-field type attention, where the nearest-neighbors receive
privileged attention, but also all other field players receive roughly equal attention. The goal keeper
typically receives no attention due to being far away.
Soccer: the encoding and decoding functions E^c(), E^s() and D() were implemented by fully-connected neural networks with two layers, each of 256 hidden units and with ReLU activations. The
encoder outputs had 128 units. For IN each layer was followed by a BatchNorm layer (otherwise the
system converged slowly to a worse minimum); for VAIN no BatchNorm layers were used. Chess:
the encoding and decoding functions E() and D() were implemented by fully-connected neural
networks with three layers, each of width 64 and with ReLU activations. They were followed by
BatchNorm layers for both IN and VAIN. Bouncing Balls: the encoding and decoding functions E^c(),
E^s() and D() were implemented with three-layer FCNs with 256 hidden units. The encoder
outputs had 128 units. No BatchNorm units were used. For Soccer, the E^c() and D() architectures
for VAIN and IN were the same. For Chess we evaluate INs with E^c() being 4 times smaller than
for VAIN; this still takes 8 times as much computation as used by VAIN. For Bouncing Balls the
computation budget was balanced between VAIN and IN by decreasing the number of hidden units in
E^c() for IN by a constant factor.

In all scenarios the attention vector a_i is of dimension 10 and shared features with the encoding
vectors e_i. Regression problems were trained with L2 loss, and classification problems were trained
with cross-entropy loss. All methods were implemented in PyTorch [34] in a Linux environment.
End-to-end optimization was carried out using ADAM [35] with a learning rate of 1e-3, and no L2
regularization was used. The learning rate was halved every 10 epochs. The chess prediction training
for the MPP took several hours on an M40 GPU; other tasks had shorter training times due to smaller
datasets.
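A sketch of the optimization setup described above (ADAM with learning rate 1e-3, halved every 10 epochs, no L2 regularization); the stand-in model, loss and random data are placeholders rather than the actual training code.

import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(4, 1)                        # stand-in for the VAIN network
criterion = nn.MSELoss()                       # L2 loss (cross-entropy for classification)
optimizer = Adam(model.parameters(), lr=1e-3)  # no L2 regularization (weight_decay=0)
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)  # halve the LR every 10 epochs

for epoch in range(30):
    x, y = torch.randn(64, 4), torch.randn(64, 1)  # dummy batch
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()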
5 Conclusion and Future Work
We have shown that VAIN, a novel architecture for factorizing interaction graphs, is effective
for predictive modeling of multi-agent systems with a linear number of neural network encoder
evaluations. We analyzed how our architecture relates to Interaction Networks and CommNets.
Examples were shown where our approach learned some of the rules of the multi-agent system. An
interesting future direction to pursue is interpreting the rules of the game in symbolic form, from
VAIN's attention maps w_{i,j}. Initial experiments that we performed have shown that some chess rules
can be learned (movement of pieces, relative values of pieces), but further research is required.
Acknowledgement
We thank Rob Fergus for significant contributions to this work. We also thank Gabriel Synnaeve and
Arthur Szlam for fruitful comments on the manuscript.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In NIPS, 2012.
[2] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap
to human-level performance in face verification. In CVPR, 2014.
[3] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for
face recognition and clustering. In CVPR, 2015.
[4] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google?s neural machine
translation system: Bridging the gap between human and machine translation. arXiv preprint
arXiv:1609.08144, 2016.
[5] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep
Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural
networks for acoustic modeling in speech recognition: The shared views of four research groups.
IEEE Signal Processing Magazine, 2012.
[6] Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro,
Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-toend speech recognition in english and mandarin. In ICML, 2016.
[7] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne
Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition.
Neural computation, 1989.
[8] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation,
1997.
[9] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction
networks for learning about objects, relations and physics. In NIPS, 2016.
[10] Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and
Silvio Savarese. Social lstm: Human trajectory prediction in crowded spaces. In CVPR, 2016.
[11] Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. In NIPS, 2016.
[12] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.
The graph neural network model. IEEE Transactions on Neural Networks, 2009.
[13] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph
domains. In IJCNN, 2005.
[14] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural
networks. ICLR, 2016.
[15] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. ICLR, 2016.
[16] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally
connected networks on graphs. ICLR, 2014.
[17] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel,
Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning
molecular fingerprints. In NIPS, 2015.
[18] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point
sets for 3d classification and segmentation. CVPR, 2017.
[19] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint
arXiv:1410.3916, 2014.
[20] Sainbayar Sukhbaatar, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS,
2015.
[21] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint
arXiv:1410.5401, 2014.
[22] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration
for deep deterministic policies: An application to starcraft micromanagement tasks. ICLR,
2017.
[23] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang.
Multiagent bidirectionally-coordinated nets for learning to play starcraft combat games. arXiv
preprint arXiv:1703.10069, 2017.
[24] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489,
2016.
[25] Yuandong Tian and Yan Zhu. Better computer go player with neural network and long-term
prediction. ICLR, 2016.
[26] Murray Campbell, A Joseph Hoane, and Feng-hsiung Hsu. Deep blue. Artificial intelligence,
2002.
[27] Matthew Lai. Giraffe: Using deep reinforcement learning to play chess. arXiv preprint
arXiv:1509.01549, 2015.
[28] Omid E David, Nathan S Netanyahu, and Lior Wolf. Deepchess: End-to-end deep neural
network for automatic learning in chess. In ICANN, 2016.
[29] Gerald Tesauro. Neurogammon: A neural-network backgammon program. In IJCNN, 1990.
[30] Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter
Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning.
arXiv preprint arXiv:1706.01427, 2017.
[31] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick,
and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary
visual reasoning. arXiv preprint arXiv:1612.06890, 2016.
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762,
2017.
[33] Svein Arne Pettersen, Dag Johansen, Håvard Johansen, Vegard Berg-Johansen, Vamsidhar Reddy Gaddam, Asgeir Mortensen, Ragnar Langseth, Carsten Griwodz, Håkon Kvale
Stensland, and Pål Halvorsen. Soccer video and player position dataset. In Proceedings of the
5th ACM Multimedia Systems Conference, pages 18-23. ACM, 2014.
[34] https://github.com/pytorch/pytorch/, 2017.
[35] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
6,483 | 6,864 | An Empirical Bayes Approach to Optimizing
Machine Learning Algorithms
James McInerney
Spotify Research
45 W 18th St, 7th Floor
New York, NY 10011
[email protected]
Abstract
There is rapidly growing interest in using Bayesian optimization to tune model and
inference hyperparameters for machine learning algorithms that take a long time to
run. For example, Spearmint is a popular software package for selecting the optimal
number of layers and learning rate in neural networks. But given that there is
uncertainty about which hyperparameters give the best predictive performance, and
given that fitting a model for each choice of hyperparameters is costly, it is arguably
wasteful to "throw away" all but the best result, as per Bayesian optimization.
A related issue is the danger of overfitting the validation data when optimizing
many hyperparameters. In this paper, we consider an alternative approach that
uses more samples from the hyperparameter selection procedure to average over
the uncertainty in model hyperparameters. The resulting approach, empirical
Bayes for hyperparameter averaging (EB-Hyp) predicts held-out data better than
Bayesian optimization in two experiments on latent Dirichlet allocation and deep
latent Gaussian models. EB-Hyp suggests a simpler approach to evaluating and
deploying machine learning algorithms that does not require a separate validation
data set and hyperparameter selection procedure.
1
Introduction
There is rapidly growing interest in using Bayesian optimization (BayesOpt) to tune model and
inference hyperparameters for machine learning algorithms that take a long time to run (Snoek
et al., 2012). Tuning algorithms by grid search is a time consuming task. Tuning by hand is
also time consuming and requires trial, error, and expert knowledge of the model. To capture this
knowledge, BayesOpt uses a performance model (usually a Gaussian process) as a guide to regions
of hyperparameter space that perform well. BayesOpt balances exploration and exploitation to decide
which hyperparameter to evaluate next in an iterative procedure.
BayesOpt for machine learning algorithms is a form of model selection in which some objective, such
as predictive likelihood or root mean squared error, is optimized with respect to hyperparameters η.
Thus, it is an empirical Bayesian procedure where the marginal likelihood is replaced by a proxy
objective. Empirical Bayes optimizes the marginal likelihood of data set X (a summary of symbols
is provided in Table 1),

    η̂ := arg max_η E_{p(θ | η)}[p(X | θ)],   (1)

then uses p(θ | X, η̂) as the posterior distribution over the unknown model parameters θ (Carlin
and Louis, 2000). Empirical Bayes is applied in different ways, e.g., gradient-based optimization
of Gaussian process kernel parameters, optimization of hyperparameters to conjugate priors in
variational inference. What is special about BayesOpt is that it performs empirical Bayes in a way
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 plots: (a) negative log likelihood on validation data; (b) negative log likelihood on test data; x-axis: iteration ordered by validation error.]
Figure 1: Performance in negative logarithm of the predictive likelihood for the validation data
(left plot) and test data (right plot) ordered by validation error. Each iteration represents a different
hyperparameter setting.
Table 1: Summary of Symbols

Symbol   Meaning
θ        the model parameters
η        the hyperparameters
λ        the hyper-hyperparameters
η̂        the hyperparameters fit by empirical Bayes
λ̂        the hyper-hyperparameters fit by empirical Bayes
X        the dataset
X*       unseen data
that requires calculating the posterior p(θ | X, η^(s)) for each member in a sequence 1, . . . , S of
candidate hyperparameters η^(1), η^(2), . . . , η^(S). Often these posteriors are approximate, such as a
point estimate, a Monte Carlo estimate, or a variational approximation. Nonetheless, these operations
are usually expensive to compute.
Therefore, what is surprising about BayesOpt for approximate inference is that it disregards most
of the computed posteriors and keeps only the posterior p(θ | X, η̂) that optimizes the marginal
likelihood. It is surprising because the intermediate posteriors have something to say about the data,
even if they condition on hyperparameter configurations that do not maximize the marginal likelihood.
In other words, when we harbour uncertainty about η, should we be more Bayesian? We argue for
this approach, especially if one believes there is a danger of overfitting η on the validation set, which
is especially the case as the dimensionality of the hyperparameters grows. As an illustrative example,
Figure 1 shows the predictive performance of a set of 115 posteriors (each corresponding to a
different hyperparameter) of latent Dirichlet allocation on validation data and testing data. Overfitting
validation means that the single best posterior would not be selected as the final answer in BayesOpt.
Bayes empirical Bayes (Carlin and Louis, 2000) extends the empirical Bayes paradigm by introducing
a family of hyperpriors p(η | λ) indexed by λ and calculates the posterior over the model parameters
by integrating,

    p(θ | X, λ) = E_{p(η | X, λ)}[p(θ | X, η)].   (2)

This leads to the question of how to select the hyper-hyperparameter λ. A natural answer is a
hierarchical empirical Bayes approach where λ is maximized¹,

    λ̂ = arg max_λ E_{p(η | λ)} E_{p(θ | η)}[p(X | θ, η)],   (3)

¹ this approach could also be called type-III maximum likelihood because it involves marginalizing over
model parameters θ, hyperparameters η, and maximizing hyper-hyperparameters λ.
and p(θ | X, λ̂) is used as the posterior. Comparing Eq. 3 to Eq. 1 highlights that we are adding an
extra layer of marginalization that can be exploited with the intermediate posteriors in hand. Note
the distinction between marginalizing the hyperparameters to the model vs. hyperparameters to the
Gaussian process of model performance. Eq. 3 describes the former; the latter is already a staple of
BayesOpt (Osborne, 2010).
In this paper, we present empirical Bayes for hyperparameter averaging (EB-Hyp), an extension to
BayesOpt that makes use of this hierarchical approach to incorporate the intermediate posteriors in
an approximate predictive distribution over unseen data X*.
The Train-Marginalize-Test Pipeline EB-Hyp is an alternative procedure for evaluating and
deploying machine learning algorithms that reduces the need for a separate validation data set.
Validation data is typically used to avoid overfitting. Overfitting is a danger in selecting both
parameters and hyperparameters. The state of the art provides sophisticated ways of regularizing or
marginalizing over parameters to avoid overfitting on training data. But there is no general method
for regularizing hyperparameters and typically there is a requirement of conjugacy or continuity in
order to simultaneously fit parameters and hyperparameters in the same training procedure.
Therefore, the standard practice for dealing with the hyperparameters of machine learning models and
algorithms is to use a separate validation data set (Murphy, 2012). One selects the hyperparameter
that results in the best performance on validation data after fitting the training data. The best
hyperparameter and corresponding posterior are then applied to a held-out test data set and the
resulting performance is the final estimate of the generalization performance of the entire system.
This practice of separate validation has carried over to BayesOpt.
EB-Hyp avoids overfitting training data through marginalization and allows us to train, marginalize,
and test without a separate validation data set. It consists of three steps:
1. Train a set of parameters on training data Xtrain , each one conditioned on a choice of
hyperparameter.
2. Marginalize the hyperparameters out of the set of full or approximate posteriors.
3. Test (or Deploy) the marginal predictive distribution on test data Xtest and report the
performance.
In this paper, we argue in favour of this framework as a way of simplifying the evaluation and
deployment pipeline. We emphasize that the train step admits a broad category of posterior approximation methods for a large number of models, including maximum likelihood, maximum a posteriori,
variational inference, or Markov chain Monte Carlo.
In summary, our contributions are the following:
• We highlight the three main shortcomings of the current prevalent approach to tuning
hyperparameters of machine learning algorithms (computationally wasteful, potentially
overfitting validation, added complexity of a separate validation data set) and propose a new
empirical Bayes procedure, EB-Hyp, to address those issues.
• We develop an efficient algorithm to perform EB-Hyp using Monte Carlo approximation
to both sample hyperparameters from the marginal posterior and to optimize over the
hyper-hyperparameters.
• We apply EB-Hyp to two models and real world data sets, comparing to random search and
BayesOpt, and find a significant improvement in held out predictive likelihood validating
the approach and approximation in practice.
2
Related Work
Empirical Bayes has a long history started by Robbins (1955) with a nonparametric approach,
to parametric EB (Efron and Morris, 1972) and modern applications of EB (Snoek et al., 2012;
Rasmussen and Williams, 2006). Our work builds on these hierarchical Bayesian approaches.
BayesOpt uses a GP to model performance of machine learning algorithms. A previous attempt at
reducing the wastefulness of BayesOpt has focused on directing computational resources toward
more optimal regions of hyperparameter space (Swersky et al., 2014). Another use of the GP as a
performance model arises in Bayesian quadrature, which uses a GP to approximately marginalize over
parameters (Osborne et al., 2012). However, quadrature is computationally infeasible for forming a
predictive density after marginalizing hyperparameters because that requires knowing p(? | X, ?) for
the whole space of ?. In contrast, the EB-Hyp approximation depends on the posterior only at the
sampled points, which has already been calculated to estimate the marginals.
Finally, EB-Hyp resembles ensemble methods, such as boosting and bagging, because it is a weighted
sum over posteriors. Boosting trains models on data reweighted to emphasize errors from previous
models (Freund et al., 1999) while bagging takes an average of models trained on bootstrapped data
(Breiman, 1996).
3
Empirical Bayes for Hyperparameter Averaging
As introduced in Section 1, EB-Hyp adds another layer in the model hierarchy with the addition of
a hyperprior p(η | λ). The Bayesian approach is to marginalize over η but, as usual, the question
of how to select the hyper-hyperparameter λ lingers. Empirical Bayes provides a response to the
selection of the hyperprior in the form of a maximum marginal likelihood approach (see Eq. 3). It is
useful to incorporate maximization into the posterior approximation when tuning machine learning
algorithms because of the small number of samples we can collect (due to the underlying assumption
that the inner training procedure is expensive to run).
Our starting point is to approximate the posterior predictive distribution under EB-Hyp using Monte
Carlo samples of η^(s) ~ p(η | X, λ̂),

    p(X* | X) ≈ (1/S) Σ_{s=1}^{S} E_{p(θ | X, η^(s))}[p(X* | θ, η^(s))]   (4)

for a choice of hyperprior p(η | λ).
There are two main challenges that Eq. 4 presents. The first is that the marginal posterior p(η | X, λ̂)
is not readily available to sample from. We address this in Section 3.1. The second is the choice of
hyperprior p(η | λ) and how to find λ̂. We describe our approach to this in Section 3.2.
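As a concrete illustration of Eq. 4, here is a minimal sketch that mixes the S per-hyperparameter predictive densities in log space; the array layout and helper name are assumptions made for illustration, not part of the paper.

```python
import numpy as np

def eb_hyp_log_predictive(log_pred_per_model, weights=None):
    """Monte Carlo approximation of log p(X* | X) as in Eq. 4.
    log_pred_per_model[s, i] is assumed to hold the log predictive density of
    test point i under the posterior fit with hyperparameter eta^(s)."""
    S = log_pred_per_model.shape[0]
    if weights is None:
        weights = np.full(S, 1.0 / S)  # plain 1/S average over the S samples
    # mix the S predictive densities in log space (log-sum-exp for stability)
    mixed = np.log(weights)[:, None] + log_pred_per_model
    return np.logaddexp.reduce(mixed, axis=0).sum()
```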
3.1
Acquisition Strategy
The acquisition strategy describes which hyperparameter to evaluate next during tuning. A naïve way
to choose evaluation point η is to sample from the uniform distribution or the hyperprior. However,
this is likely to select a number of points where p(X | θ, η) has low density, squandering computational
resources.
BayesOpt addresses this by using an acquisition function conditioned on the current performance
model posterior then maximizing this function to select the next evaluation point. BayesOpt offers
several choices for the acquisition function. The most prominent are expected improvement, upper
confidence bound, and Thompson sampling (Brochu et al., 2010; Chapelle and Li, 2011). Expected
improvement and the upper confidence bound result in deterministic acquisition functions and are
therefore hard to incorporate into Eq. 4, which is a Monte Carlo average. In contrast, Thompson
sampling is a stochastic procedure that is competitive with the non-stochastic procedures (Chapelle
and Li, 2011), so we use it as a starting point for our acquisition strategy.
Thompson sampling maintains a model of rewards for actions performed in an environment and
repeats the following for iteration s = 1, . . . , S:
1. Draw a simulation of rewards from the current reward posterior conditioned on the history:
   r^(s) ~ p(r | {η^(t), f^(t) | t < s}).
2. Choose the action that gives the maximum reward in the simulation: η^(s) = arg max_η r^(s)(η).
3. Observe reward f^(s) from the environment for performing action η^(s).
Thompson sampling balances exploration with exploitation because actions with large posterior
means and actions with high variance are both more likely to appear as the optimal action in the
sample r(s) . However, the arg max presents difficulties in the reweighting required to perform Bayes
empirical Bayes approaches. We discuss these difficulties in more depth in Section 3.2. Furthermore,
it is unclear exactly what the sample set {η^(1), . . . , η^(S)} represents. This question becomes pertinent
when we care about more than just the optimal hyperparameter. To address these issues, we next
present a procedure that generalizes Thompson sampling when it is used for hyperparameter tuning.
Performance Model Sampling Performance model sampling is based on the idea that the set of
simulated rewards r(s) can themselves be treated as a probability distribution of hyperparameters,
from which we can also draw samples. In a hyperparameter selection context, let p̂^(s)(X | η) ∝ r^(s),
the marginal likelihood. The procedure repeats for iterations s = 1, . . . , S:

1. draw p̂^(s)(X | η) ~ P(p(X | η) | {η^(t), f_X^(t) | t < s})
2. draw η^(s) ~ p̂^(s)(η | X)
3. evaluate f_X^(s) = ∫ p(X | θ) p(θ | η^(s)) dθ

where p̂^(s)(η | X) := Z^{-1} p̂^(s)(X | η) p(η)   (5)

where P is the performance model distribution and Z is the normalization constant.² The marginal
likelihood p(X | η^(s)) may be evaluated exactly (e.g., Gaussian process marginal given kernel
hyperparameters) or estimated using methods that approximate the posterior p(θ | X, η^(s)) such as
maximum likelihood estimation, Markov chain Monte Carlo sampling, or variational inference.
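The loop in Eq. 5 can be sketched on a discrete hyperparameter grid as follows; `evaluate_marginal`, `gp_fit`, and `gp_sample` are hypothetical placeholders for the expensive inner inference run and the performance model. Swapping the sampling step for an argmax recovers the Thompson-sampling special case discussed below.

```python
import numpy as np

def performance_model_sampling(candidates, evaluate_marginal, gp_fit, gp_sample,
                               S, rng=None):
    """Sketch of Eq. 5. `candidates` is an (M, d) grid of hyperparameters;
    `evaluate_marginal(eta)` returns an estimate of log p(X | eta);
    `gp_fit`/`gp_sample` fit and draw one function from the performance model P."""
    rng = rng or np.random.default_rng(0)
    etas, fs = [], []
    for s in range(S):
        if etas:
            gp = gp_fit(np.array(etas), np.array(fs))
            log_phat = gp_sample(gp, candidates)   # step 1: draw p-hat^(s)(X | eta)
        else:
            log_phat = np.zeros(len(candidates))   # flat before any evidence
        # normalize to p-hat^(s)(eta | X), here with a uniform prior p(eta) on the grid
        w = np.exp(log_phat - log_phat.max())
        w /= w.sum()
        i = rng.choice(len(candidates), p=w)       # step 2 (argmax here -> Thompson sampling)
        f = evaluate_marginal(candidates[i])       # step 3: expensive inner run
        etas.append(candidates[i]); fs.append(f)
    return etas, fs
```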
Thompson sampling is recovered from performance model sampling when the sample in Step 2 of
Eq. 5 is replaced with the maximum a posteriori approximation (with a uniform prior over the bounds
of the hyperparameters) to select where to obtain the next hyperparameter sample η^(s). Given the
effectiveness of Thompson sampling in various domains (Chapelle and Li, 2011), this is likely to
work well for hyperparameter selection. Furthermore, Eq. 5 admits a broader range of acquisition
strategies, the simplest being a full sample. And importantly, it allows us to consider the convergence
of EB-Hyp.
The sample p̂^(s)(X | η) of iteration s from the procedure in Eq. 5 converges to the true probability
density function p(X | η) as s → ∞ under the assumptions that p(X | η) is smooth and the performance
model P is drawn from a log Gaussian process with smooth mean and covariance over a finite input
space. Consistency of the Gaussian process in one dimension has been shown for fixed Borel
probability measures (Choi and Schervish, 2004). Furthermore, rates of convergence are favourable
for a variety of covariance functions using the log Gaussian process for density estimation (van der
Vaart and van Zanten, 2008). Performance model sampling additionally changes the sampling
distribution of η on each iteration. Simulation p̂^(s)(η | X) from the posterior of P conditioned on
the evaluation history has non-zero density wherever the prior p(η) is non-zero by the definition of
p̂^(s)(η | X) in Eq. 5 and the fact that draws from a log Gaussian process are non-zero. Therefore, as
s → ∞, the input-output set {η^(t), f_X^(t) | t < s} on which P is conditioned will cover the input space.
It follows from the above discussion that the samples {η^(s) | s ∈ [1, S]} from the procedure in
Eq. 5 converge to the posterior distribution p(η | X) as S → ∞. Therefore, the sample p̂^(s)(X | η)
converges to the true pdf p(X | η) as s → ∞. Since {η^(s) | s ∈ [1, S]} is sampled independently from
{p̂^(s)(X | η) | s ∈ [1, S]} (respectively), the set of samples therefore tends to p(η | X) as S → ∞.
A key limitation to the above discussion for continuous hyperparameters is the assumption that the
true marginal p(X | η) is smooth. This may not always be the case, for example an infinitesimal
change in the learning rate for gradient descent on a non-convex objective could result in finding a
completely different local optimum. This affects asymptotic convergence but discontinuities in the
² Z can be easily calculated if η is discrete or if p(η) is conjugate to p(X | η). In non-conjugate continuous
cases, η may be discretized to a high granularity. Since EB-Hyp is an active procedure, the limiting computational
bottleneck is to calculate the posterior of the performance model. For GPs, this is an O(S 3 ) operation in the
number of hyperparameter evaluations S. If onerous, the operation is amenable to well established fast
approximations, e.g., the inducing points method (Hensman et al., 2013).
inputs: training data Xtrain and inference algorithm A : (X, η) → p(θ | X, η)
output: predictive density p(X* | Xtrain)
1: initialize evaluation history V = {}
2: while V not converged do
3:    draw performance function from GP posterior p̂^(s)(X | η) ~ GP(η | V)
4:    calculate hyperparameter posterior p̂^(s)(η | X) := Z^{-1} p̂^(s)(X | η) p(η)
5:    draw next evaluation point η^(s) := arg max_η p̂^(s)(η | X)
6:    run parameter inference conditioned on hyperparameter p(θ | η^(s)) := A(Xtrain, η^(s))
7:    evaluate performance f_X^(s) := ∫ p(Xtrain | θ) p(θ | η^(s)) dθ
8:    append (η^(s), f_X^(s)) to history V
9: end while
10: find optimal λ̂ using Eq. 3 (discussed in Section 3.2)
11: return: approximation to p(X* | Xtrain) using Eq. 4
Algorithm 1: Empirical Bayes for hyperparameter averaging (EB-Hyp)
Table 2: Predictive log likelihood for latent Dirichlet allocation (LDA), 20 Newsgroup dataset

Method                           Predictive Log Lik.  (% Improvement on BayesOpt)
BayesOpt   with validation       -357648    (0.00%)
BayesOpt   without validation    -361661    (-1.12%)
EB-Hyp     with validation       -357650    (-0.00%)
EB-Hyp     without validation    -351911    (+1.60%)
Random                           -2666074   (-645%)
marginal likelihood are not likely to affect the outcome at the scale of the number of evaluations typical
in hyperparameter tuning. Importantly, the smoothness assumption does not pose a problem to
discrete hyperparameters (e.g., number of units in a hidden layer). Another limitation of performance
model sampling is that it focuses on the marginal likelihood as the metric to be optimized. This
is less of a restriction as it may first appear. Various performance metrics are often equivalent or
approximations to a particular likelihood, e.g., mean squared error is the negative log likelihood of a
Gaussian-distributed observation.
3.2
Weighting Strategy
Performance model sampling provides a set of hyperparameter samples, each with a performance
f_X^(s) and a computed posterior p(θ | X, η^(s)). These three elements can be combined in a Monte Carlo
average to provide a prediction over unseen data or a mean parameter value.
Following from Section 3.1, the samples of η from Eq. 5 converge to the distribution p(η | X, λ).
A standard Bayesian treatment of the hierarchical model requires selecting a fixed λ, equivalent to
a predetermined weighted or unweighted average of the models of a BayesOpt run. However, we
found that fixing λ is not competitive with approaches to hyperparameter tuning that involve some
maximization. This is likely to arise from the small number of samples collected during tuning
(recall that collecting more samples involves new entire runs of parameter training and is usually
computationally expensive).
The empirical Bayes selection of λ̂ selects the best hyper-hyperparameter and reintroduces maximization in a way that makes use of the intermediate posteriors during tuning, as in Eq. 4. In addition, it
uses hyper-hyperparameter optimization to find λ̂. This depends on the choice of hyperprior. There is
flexibility in this choice; we found that a nonparametric hyperprior that places a uniform distribution
over the top T < S samples (by value of f_X(η^(t))) from Eq. 4 works well in practice, and this is
what we use in Section 4 with T = ⌊S/10⌋. This choice of hyperprior avoids converging on a point
mass in the limit of infinitely sized data X and forces the approximate marginal to spread probability
mass across a well-performing set of models, any one of which is likely to dominate the prediction
for any given data point (though, importantly, it will not always be the same model).

Table 3: Predictive log lik. for deep latent Gaussian model (DLGM), Labeled Faces in the Wild

Method                           Predictive Log Lik.  (% Improvement on BayesOpt)
BayesOpt   with validation       -17071    (0.00%)
BayesOpt   without validation    -15970    (+6.45%)
EB-Hyp     with validation       -16375    (+4.08%)
EB-Hyp     without validation    -15872    (+7.02%)
Random                           -17271    (-1.17%)
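Assuming the tuning loop above has produced samples η^(s) with performances f_X^(s) and stored per-posterior predictive densities, a minimal sketch of this top-T weighting (steps 10-11 of Algorithm 1) is shown below; `log_pred_fn` is a hypothetical accessor for the stored posteriors.

```python
import numpy as np

def marginalize_and_test(etas, f_vals, log_pred_fn):
    """Steps 10-11 of Algorithm 1 with the top-T uniform hyperprior: keep the
    T = floor(S/10) best-performing samples, weight them uniformly, and average
    their predictive densities on test data (Eq. 4)."""
    S = len(etas)
    T = max(1, S // 10)
    top = np.argsort(f_vals)[-T:]          # empirical Bayes choice of lambda-hat
    log_preds = np.stack([log_pred_fn(etas[i]) for i in top])
    return np.logaddexp.reduce(log_preds - np.log(T), axis=0).sum()
```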
After the Markov chain in Eq. 5 converges, the samples {η^(s) | s = 1, . . . , S} and the (approximated)
posteriors p(θ | X, η^(s)) can be used in Eq. 4. The EB-Hyp algorithm is summarized in Algorithm 1.
The dominating computational cost comes from running inference to evaluate A(Xtrain, η^(s)). All
the other steps combined are negligible in comparison.
4
Experiments
We apply EB-Hyp and BayesOpt to two approximate inference algorithms and data sets. We also
apply uniform random search, which is known to outperform a grid or manual search (Bergstra and
Bengio, 2012).
In the first experiment, we consider stochastic variational inference on latent Dirichlet allocation
(SVI-LDA) applied to the 20 Newsgroups data.3 In the second, a deep latent Gaussian model (DLGM)
on the Labeled Faces in the Wild data set (Huang et al., 2007). We find that EB-Hyp outperforms
BayesOpt and random search as measured by predictive likelihood.
For the performance model, we use the log Gaussian process in our experiments implemented in
the GPy package (GPy, 2012). The performance model uses the Matérn 3/2 kernel to express the
assumption that nearby hyperparameters typically perform similarly; but this kernel has the advantage
of being less smooth than the squared exponential, making it more suitable to capture abrupt changes
in the marginal likelihood (Stein, 1999). Between each hyperparameter sample, we optimize the
kernel parameters and the independent noise distribution for the observations so far by maximizing
the marginal likelihood of the Gaussian process.
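A minimal sketch of such a performance model with GPy is shown below; the inputs H (evaluated hyperparameter settings) and f (their observed performances) are random placeholders, and for simplicity the sketch regresses the performance values directly rather than modelling them with a log Gaussian process.

```python
import numpy as np
import GPy  # GPy (2012), as cited in the text

H = np.random.rand(20, 4)          # 20 evaluated hyperparameter settings, 4 dims
f = np.random.rand(20, 1)          # their observed performance values
kernel = GPy.kern.Matern32(input_dim=H.shape[1])   # Matern 3/2, as in the text
model = GPy.models.GPRegression(H, f, kernel)      # includes independent Gaussian noise
model.optimize()                   # maximize GP marginal likelihood over kernel params
```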
Throughout, we randomly split the data into training, validation, and test sets. To assess the necessity of a separate validation set we consider two scenarios: (1) training and validating on the
train+validation data, (2) training on the train data and validating on the validation data. In either
case, the test data is used only at the final step to report overall performance.
4.1
Latent Dirichlet Allocation
Latent Dirichlet allocation (LDA) is an unsupervised model that finds topic structure in a set of text
documents expressed as K word distributions (one per topic) and D topic distributions (one per
document). We apply stochastic variational inference to LDA (Hoffman et al., 2013), a method that
approximates the posterior over parameters p(θ | X, η) in Eq. 4 with a variational distribution q(θ | v, η).
The algorithm minimizes the KL divergence between q and p by adjusting the variational parameters.
We explored four hyperparameters of SVI-LDA in the experiments: K ∈ [50, 200], the number
of topics; log(α) ∈ [−5, 0], the hyperparameter to the Dirichlet document-topic prior; log(η) ∈
[−5, 0], the hyperparameter to the Dirichlet topic-word distribution prior; κ ∈ [0.5, 0.9], the decay
parameter to the learning rate (t₀ + t)^{−κ}, where t₀ was fixed at 10 for this experiment. Several other
hyperparameters are required and were kept fixed during the experiment. The minibatch size was
fixed at 100 documents and the vocabulary was selected from the top 1,000 words, excluding stop
words, words that appear in over 95% of documents, and words that appear in only one document.
3
http://qwone.com/~jason/20Newsgroups/
[Figure 2 plot: 2D surface over log(eta) (x-axis) and log(alpha) (y-axis).]
Figure 2: A 2D slice of the performance model posterior after a run of EB-Hyp on LDA. The two
hyperparameters control the sparsity of the Dirichlet priors. The plot indicates a negative relationship
between them.
The 11,314 resulting documents were randomly split 80%-10%-10% into training, validation, and
test sets.
Table 2 shows performance in log likelihood on the test data of the two approaches. The percentage
change over the BayesOpt benchmark is reported in parentheses. EB-Hyp performs significantly
better than BayesOpt in this problem. To understand why, Figure 1 examines the error (negative log
likelihood) on both the validation and test data for all the hyperparameters selected during BayesOpt.
In the test scenario, BayesOpt chooses the hyperparameters corresponding to the left-most bar in
Figure 1b because those hyperparameters minimized error on the validation set. However, Figure 1b
shows that other hyperparameter settings outperform this selection when testing. For finite validation
data, there is no way of knowing how the optimal hyperparameter will behave on test data before
seeing it, motivating an averaging approach like EB-Hyp. In addition, Table 2 shows that a separate
validation data set is not necessary with EB-Hyp. In contrast, BayesOpt does need separate validation
and overfits the training data without it.
Figure 2 shows a slice of the posterior mean function of the performance model for two of the
hyperparameters, α and η, controlling the sparsity of the document-topic and the topic-word
distributions, respectively. There is a negative relationship between the two hyperparameters, meaning
that the sparser we make the topic distribution for documents, the denser we need to make the word
distribution for topics to maintain the same performance (and vice versa). EB-Hyp combines several
models of different degrees of sparsity in a way that respects this trade-off.
4.2
Supervised Deep Latent Gaussian Models
Stochastic backpropagation for deep latent Gaussian models (DLGMs) approximates the posterior
of an unsupervised deep model using variational inference and stochastic gradient ascent (Rezende
et al., 2014). In addition to a generator network, a recognition network is introduced that amortizes
inference (i.e., once trained, the recognition network finds variational parameters for new data in a
closed-form expression). In this experiment, we use an extension of the DLGM with supervision
(Li et al., 2015) to perform label prediction on a subset of the Labeled Faces in the Wild data set
(Huang et al., 2007). The data consist of 1,288 images of 1,850 pixels each, split 60%-20%-20% into
training, validation, and test data (respectively).
We considered 4 hyperparameters for the DLGM with a one-layered recognition model: N1 ∈
[10, 200], the number of hidden units in the first layer of the generative and recognition models;
N2 ∈ [0, 200], the number of hidden units in the second layer of the generative model only (when
N2 = 0, only one layer is used); log(σ) ∈ [−5, −0.05], the variance of the prior of the weights
in the generative model; and log(α) ∈ [−5, −0.05], the gradient ascent step size. Table 3 shows
performance for the DLGM. The single best performing hyperparameters were (N1 = 91, N2 =
86, log(σ) = −5, log(α) = −5). We find again that EB-Hyp outperforms all the other methods on
test data. This is achieved without validation.
5
Conclusions
We introduced a general-purpose procedure for dealing with unknown hyperparameters that control
the behaviour of machine learning models and algorithms. Our approach is based on approximately
marginalizing the hyperparameters by taking a weighted average of posteriors calculated by existing
inference algorithms that are time intensive. To do this, we introduced a procedure for sampling
informative hyperparameters from a performance model. Our approaches are supported by an efficient
algorithm. In two sets of experiments, we found this algorithm outperforms optimization and random
approaches.
The arguments and evidence presented in this paper point toward a tendency of the standard
optimization-based methodologies to overfit hyperparameters. Other things being equal, this tendency
punishes (in reported performance on test data) methods that are more sensitive to hyperparameters
compared to methods that are less sensitive. The result is a bias in the literature towards methods
whose generalization performance is less sensitive to hyperparameters. Averaging approaches like
EB-Hyp help reduce this bias.
Acknowledgments
Many thanks to Scott Linderman, Samantha Hansen, Eric Humphrey, Ching-Wei Chen, and the
reviewers of the workshop on Advances in Approximate Bayesian Inference (2016) for their insightful
comments and feedback.
References
Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13, 281–305.
Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140.
Brochu, E., Cora, V. M., and De Freitas, N. (2010). A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.
Carlin, B. P. and Louis, T. A. (2000). Empirical Bayes: Past, present and future. Journal of the American Statistical Association, 95(452), 1286–1289.
Chapelle, O. and Li, L. (2011). An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257.
Choi, T. and Schervish, M. J. (2004). Posterior consistency in nonparametric regression problems under Gaussian process priors.
Efron, B. and Morris, C. (1972). Limiting the risk of Bayes and empirical Bayes estimators–Part II: The empirical Bayes case. Journal of the American Statistical Association, 67(337), 130–139.
Freund, Y., Schapire, R., and Abe, N. (1999). A short introduction to boosting. Journal of the Japanese Society for Artificial Intelligence, 14(771-780), 1612.
GPy (2012). GPy: A Gaussian process framework in Python. http://github.com/SheffieldML/GPy.
Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. arXiv preprint arXiv:1309.6835.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. W. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1), 1303–1347.
Huang, G. B., Ramesh, M., Berg, T., and Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
Li, C., Zhu, J., Shi, T., and Zhang, B. (2015). Max-margin deep generative models. In Advances in Neural Information Processing Systems, pages 1837–1845.
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
Osborne, M. (2010). Bayesian Gaussian Processes for Sequential Prediction, Optimisation and Quadrature. Ph.D. thesis, University of Oxford.
Osborne, M., Garnett, R., Ghahramani, Z., Duvenaud, D. K., Roberts, S. J., and Rasmussen, C. E. (2012). Active learning of model evidence using Bayesian quadrature. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 46–54. Curran Associates, Inc.
Rasmussen, C. E. and Williams, C. K. (2006). Gaussian Processes for Machine Learning. The MIT Press, 2(3), 4.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pages 1278–1286.
Robbins, H. (1955). The empirical Bayes approach to statistical decision problems. In Herbert Robbins Selected Papers, pages 49–68. Springer.
Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959.
Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media.
Swersky, K., Snoek, J., and Adams, R. P. (2014). Freeze-thaw Bayesian optimization. arXiv preprint arXiv:1406.3896.
van der Vaart, A. W. and van Zanten, J. H. (2008). Rates of contraction of posterior distributions based on Gaussian process priors. The Annals of Statistics, pages 1435–1463.
6,484 | 6,865 | Differentially Private Empirical Risk Minimization
Revisited: Faster and More General*
Di Wang
Dept. of Computer Science and Engineering
State University of New York at Buffalo
Buffalo, NY 14260
[email protected]
Minwei Ye
Dept. of Computer Science and Engineering
State University of New York at Buffalo
Buffalo, NY 14260
[email protected]
Jinhui Xu
Dept. of Computer Science and Engineering
State University of New York at Buffalo
Buffalo, NY 14260
[email protected]
Abstract
In this paper we study the differentially private Empirical Risk Minimization
(ERM) problem in different settings. For smooth (strongly) convex loss function
with or without (non)-smooth regularization, we give algorithms that achieve either
optimal or near optimal utility bounds with less gradient complexity compared with
previous work. For ERM with smooth convex loss function in high-dimensional
(p ≫ n) setting, we give an algorithm which achieves the upper bound with less
gradient complexity than previous ones. At last, we generalize the expected excess
empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Łojasiewicz condition and give a tighter upper bound on the utility than the one in
[34].
1
Introduction
Privacy preserving is an important issue in learning. Nowadays, learning algorithms are often required
to deal with sensitive data. This means that the algorithm needs to not only learn effectively from
the data but also provide a certain level of guarantee on privacy preservation. Differential privacy is a
rigorous notion of statistical data privacy and has received a great deal of attention in recent years
[11, 10]. As a commonly used supervised learning method, Empirical Risk Minimization (ERM)
also faces the challenge of simultaneously achieving privacy preservation and learning. Differentially
Private (DP) ERM with convex loss function has been extensively studied in the last decade, starting
from [7]. In this paper, we revisit this problem and present several improved results.
Problem Setting Given a dataset D = {z₁, z₂, · · · , zₙ} from a data universe X, and a closed
convex set C ⊆ R^p, DP-ERM is to find

    x* ∈ arg min_{x∈C} F^r(x, D) = F(x, D) + r(x) = (1/n) Σ_{i=1}^n f(x, z_i) + r(x)

with the guarantee of being differentially private. We refer to f as the loss function. r(·) is some simple
(non)-smooth convex function called regularizer. If the loss function is convex, the utility of the
* This research was supported in part by NSF through grants IIS-1422591, CCF-1422324, and CCF-1716400.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Method                                  Utility Upper Bd              Gradient Complexity       Non-smooth Regularizer?
[8][7]      Objective Perturbation      O(p/(n²ε²))                   N/A                       No
[21]        Objective Perturbation      O(p/(n²ε²) + λ‖x*‖₂²/n)       N/A                       Yes
[6]         Gradient Perturbation       O(p log(n)/(n²ε²))            O(n²)                     Yes
[34]        Output Perturbation         O(p/(n²ε²))                   O(nκ log(n/ε))            No
This Paper  Gradient Perturbation       O(p log(n)/(n²ε²))            O((n + κ) log(nκ/p))      Yes

Table 1: Comparison with previous (ε, δ)-DP algorithms. We assume that the loss function f is
convex, 1-smooth, differentiable (twice differentiable for objective perturbation), and 1-Lipschitz. F^r
is μ-strongly convex. Bound and complexity ignore multiplicative dependence on log(1/δ). κ = L/μ
is the condition number. The lower bound is Ω(min{1, p/(n²ε²)}) [6].
algorithm is measured by the expected excess empirical risk, i.e., E[F^r(x_private, D)] − F^r(x*, D). The
expectation is over the coins of the algorithm.
A number of approaches exist for this problem with convex loss function, which can be roughly
classified into three categories. The first type of approaches is to perturb the output of a non-DP
algorithm. [7] first proposed the output perturbation approach, which was extended by [34]. The second
type of approaches is to perturb the objective function [7]. We refer to it as the objective perturbation
approach. The third type of approaches is to perturb gradients in first order optimization algorithms.
[6] proposed gradient perturbation approach and gave the lower bound of the utility for both general
convex and strongly convex loss functions. Later, [28] showed that this bound can actually be broken
by adding more restrictions on the convex domain C of the problem.
As shown in the following tables², the output perturbation approach can achieve the optimal bound of
utility for the strongly convex case. But it cannot be generalized to the case with a non-smooth regularizer.
The objective perturbation approach needs to obtain the optimal solution to ensure both differential
privacy and utility, which is often intractable in practice, and cannot achieve the optimal bound. The
gradient perturbation approach can overcome all the issues and thus is preferred in practice. However,
its existing results are all based on Gradient Descent (GD) or Stochastic Gradient Descent (SGD).
For large datasets, they are slow in general. In the first part of this paper, we present algorithms
with tighter utility upper bound and less running time. Almost all the aforementioned results did
not consider the case where the loss function is non-convex. Recently, [34] studied this case and
measured the utility by gradient norm. In the second part of this paper, we generalize the expected
excess empirical risk from convex loss functions to those satisfying the Polyak-Łojasiewicz condition, and give a tighter upper bound of
the utility given in [34]. Due to the space limit, we leave many details, proofs, and experimental studies
in the supplement.
2
Related Work
There is a long list of works on differentially private ERM in the last decade which attack the problem
from different perspectives. [17][30] and [2] investigated regret bound in online settings. [20] studied
regression in incremental settings. [32] and [31] explored the problem from the perspective of
learnability and stability. We will compare to the works that are most related to ours from the utility
and gradient complexity (i.e., the number (complexity) of first order oracle (f(x, z_i), ∇f(x, z_i))
being called) points of view. Table 1 is the comparison for the case where the loss function is strongly
convex and 1-smooth. Our algorithm achieves near optimal bound with less gradient complexity
compared with previous ones. It is also robust to non-smooth regularizers.
Tables 2 and 3 show that for the non-strongly convex and high-dimensional cases, our algorithms outperform other peer methods. Particularly, we improve the gradient complexity from O(n²) to O(n log n)
while preserving the optimal bound for the non-strongly convex case. For the high-dimensional case, gradient
complexity is reduced from O(n³) to O(n^{1.5}). Note that [19] also considered the high-dimensional case
² Bound and complexity ignore multiplicative dependence on log(1/δ).
Method                                  Utility Upper Bd              Gradient Complexity         Non-smooth Regularizer?
[21]        Objective Perturbation      O(√p/(nε))                    N/A                         Yes
[6]         Gradient Perturbation       O(√p log^{3/2}(n)/(nε))       O(n²)                       Yes
[34]        Output Perturbation         O((√p/(nε))^{2/3})            O(n(n/d)^{2/3})             No
This paper  Gradient Perturbation       O(√p/(nε))                    O(nε/√p + n log(nε/√p))     Yes

Table 2: Comparison with previous (ε, δ)-DP algorithms, where F^r is not necessarily strongly convex.
We assume that the loss function f is convex, 1-smooth, differentiable (twice differentiable for
objective perturbation), and 1-Lipschitz. Bound and complexity ignore multiplicative dependence on
log(1/δ). The lower bound in this case is Ω(min{1, √p/(nε)}) [6].
via dimension reduction. But their method requires the optimal value in the dimension-reduced space;
in addition, they considered loss functions under a different Lipschitz condition rather than the ℓ₂-norm one.
For non-convex problems under differential privacy, [15][9][13] studied private SVD. [14] investigated
k-median clustering. [34] studied ERM with non-convex smooth loss functions. In [34], the authors
defined the utility using the gradient norm as E[‖∇F(x_private)‖²]. They achieved a qualified utility in
O(n²) gradient complexity via DP-SGD. In this paper, we use DP-GD and show that it has a tighter
utility upper bound.
Method                                  Utility Upper Bd                           Gradient Complexity                        Non-smooth Regularizer?
[28]        Gradient Perturbation       O(√(G_C² + ‖C‖₂²) log(n)/(nε))             O(n³ε²/((G_C² + ‖C‖₂²) log²(n)))           Yes
[28]        Objective Perturbation      O((G_C² + λ‖C‖₂²)/(nε))                    N/A                                        No
[29]        Gradient Perturbation       O(G_C^{2/3} log²(n)/(nε)^{2/3})            O((nε)^{2/3})                              Yes
This paper  Gradient Perturbation       O(√(G_C² + ‖C‖₂²)/(nε))                    O(n^{1.5}√L/(G_C² + ‖C‖₂²)^{1/4})          No

Table 3: Comparison with previous (ε, δ)-DP algorithms. We assume that the loss function f is
convex, 1-smooth, differentiable (twice differentiable for objective perturbation), and 1-Lipschitz.
The utility bound depends on G_C, which is the Gaussian width of C. Bound and complexity ignore
multiplicative dependence on log(1/δ).
3
Preliminaries
Notations: We let [n] denote {1, 2, . . . , n}. Vectors are in column form. For a vector v, we use
‖v‖₂ to denote its ℓ₂-norm. For the gradient complexity notation, G, L, μ are omitted unless specified.
D = {z₁, · · · , zₙ} is a dataset of n individuals.
Definition 3.1 (Lipschitz Function over θ). A loss function f : C × X → R is G-Lipschitz (under
the ℓ₂-norm) over θ, if for any z ∈ X and θ₁, θ₂ ∈ C, we have |f(θ₁, z) − f(θ₂, z)| ≤ G‖θ₁ − θ₂‖₂.
Definition 3.2 (L-smooth Function over θ). A loss function f : C × X → R is L-smooth over θ with
respect to the norm ‖·‖ if for any z ∈ X and θ₁, θ₂ ∈ C, we have

    ‖∇f(θ₁, z) − ∇f(θ₂, z)‖* ≤ L‖θ₁ − θ₂‖,

where ‖·‖* is the dual norm of ‖·‖. If f is differentiable, this yields

    f(θ₁, z) ≤ f(θ₂, z) + ⟨∇f(θ₂, z), θ₁ − θ₂⟩ + (L/2)‖θ₁ − θ₂‖².
We say that two datasets D, D′ are neighbors if they differ by only one entry, denoted as D ∼ D′.
Definition 3.3 (Differentially Private [11]). A randomized algorithm A is (ε, δ)-differentially private
if for all neighboring datasets D, D′ and for all events S in the output space of A, we have

    Pr(A(D) ∈ S) ≤ e^ε Pr(A(D′) ∈ S) + δ;

when δ = 0, A is ε-differentially private.
We will use the Gaussian mechanism [11] and the moments accountant [1] to guarantee (ε, δ)-DP.
Definition 3.4 (Gaussian Mechanism). Given any function q : X^n → R^p, the Gaussian mechanism
is defined as:

    M_G(D, q, ε) = q(D) + Y,

where Y is drawn from the Gaussian distribution N(0, σ²I_p) with σ ≥ √(2 ln(1.25/δ)) Δ₂(q)/ε. Here Δ₂(q)
is the ℓ₂-sensitivity of the function q, i.e., Δ₂(q) = sup_{D∼D′} ‖q(D) − q(D′)‖₂. The Gaussian mechanism
preserves (ε, δ)-differential privacy.
The moments accountant proposed in [1] is a method to accumulate the privacy cost; it gives a tighter
bound for ε and δ. Roughly speaking, when we use the Gaussian mechanism on (stochastic)
gradient descent, we can save a factor of √(ln(T/δ)) in the asymptotic bound of the standard deviation of
noise compared with the advanced composition theorem in [12].
Theorem 3.1 ([1]). For a G-Lipschitz loss function, there exist constants c₁ and c₂ so that given the
sampling probability q = l/n and the number of steps T, for any ε < c₁q²T, a DP stochastic gradient
algorithm with batch size l that injects Gaussian noise with standard deviation (G/n)σ to the gradients
(Algorithm 1 in [1]) is (ε, δ)-differentially private for any δ > 0 if

    σ ≥ c₂ q √(T ln(1/δ)) / ε.
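Both noise calibrations can be written down directly; in the sketch below, the unspecified constant c₂ from Theorem 3.1 is left as an explicit placeholder, and all function names are illustrative.

```python
import numpy as np

def gaussian_mechanism(q_value, l2_sensitivity, eps, delta, rng=None):
    """Definition 3.4: release q(D) + N(0, sigma^2 I) with
    sigma = sqrt(2 * ln(1.25 / delta)) * Delta_2(q) / eps."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return q_value + rng.normal(0.0, sigma, size=np.shape(q_value))

def moments_accountant_sigma(q, T, eps, delta, c2=1.0):
    """Theorem 3.1: noise multiplier sigma >= c2 * q * sqrt(T * ln(1/delta)) / eps.
    c2 is an unspecified constant in the theorem; c2 = 1.0 is only a placeholder."""
    return c2 * q * np.sqrt(T * np.log(1.0 / delta)) / eps
```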
4
Differentially Private ERM with Convex Loss Function
In this section we will consider ERM with a (non)-smooth regularizer³, i.e.,

    min_{x∈R^p} F^r(x, D) = F(x, D) + r(x) = (1/n) Σ_{i=1}^n f(x, z_i) + r(x).   (1)
The loss function f is convex for every z. We define the proximal operator as
    prox_r(y) = arg min_{x∈R^p} { (1/2)‖x − y‖₂² + r(x) },

and denote x* = arg min_{x∈R^p} F^r(x, D).
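For common regularizers the proximal operator has a closed form; for example, r(x) = λ‖x‖₁ yields the soft-thresholding operator sketched below.

```python
import numpy as np

def prox_l1(y, lam):
    """Soft-thresholding: the proximal operator of r(x) = lam * ||x||_1,
    i.e. argmin_x { 0.5 * ||x - y||_2^2 + lam * ||x||_1 }."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```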
Algorithm 1 DP-SVRG(F^r, x̃₀, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth. F^r(x, D) is μ-strongly convex w.r.t. the ℓ₂-norm. x̃₀ is the
initial point, η is the step size, T, m are the iteration numbers.
 1: for s = 1, 2, · · · , T do
 2:    x̃ = x̃_{s−1}
 3:    ṽ = ∇F(x̃)
 4:    x₀^s = x̃
 5:    for t = 1, 2, · · · , m do
 6:       Pick i_t^s ∈ [n]
 7:       v_t^s = ∇f(x_{t−1}^s, z_{i_t^s}) − ∇f(x̃, z_{i_t^s}) + ṽ + u_t^s, where u_t^s ~ N(0, σ²I_p)
 8:       x_t^s = prox_{ηr}(x_{t−1}^s − ηv_t^s)
 9:    end for
10:    x̃_s = (1/m) Σ_{k=1}^m x_k^s
11: end for
12: return x̃_T
³ All of the algorithms and theorems in this section are applicable to a closed convex set C rather than R^p.
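A minimal NumPy sketch of Algorithm 1 follows; `grad_f` and `prox` are placeholders for the per-example gradient and the proximal map of r, and σ should be set as in Eq. (3).

```python
import numpy as np

def dp_svrg(grad_f, prox, data, x0, T, m, eta, sigma, rng=None):
    """A minimal sketch of Algorithm 1 (DP-SVRG). grad_f(x, z) is the
    per-example gradient; prox(v, eta) computes prox_{eta * r}(v); sigma
    should follow Eq. (3). All names here are illustrative."""
    rng = rng or np.random.default_rng()
    n = len(data)
    x_tilde = np.array(x0, dtype=float)
    for _ in range(T):
        # full gradient at the snapshot point
        v_tilde = sum(grad_f(x_tilde, z) for z in data) / n
        x = x_tilde.copy()
        iterates = np.empty((m, x.size))
        for t in range(m):
            z = data[rng.integers(n)]
            noise = rng.normal(0.0, sigma, size=x.size)
            v = grad_f(x, z) - grad_f(x_tilde, z) + v_tilde + noise
            x = prox(x - eta * v, eta)
            iterates[t] = x
        x_tilde = iterates.mean(axis=0)   # epoch average becomes the next snapshot
    return x_tilde
```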
4.1
Strongly convex case
We first consider the case where F^r(x, D) is μ-strongly convex. Algorithm 1 is based on Prox-SVRG [33], which is much faster than SGD or GD. We will show that DP-SVRG is also faster than
DP-SGD or DP-GD in terms of the time needed to achieve the near optimal excess empirical risk
bound.
Definition 4.1 (Strongly Convex). The function f(x) is μ-strongly convex with respect to the norm ‖·‖
if for any x, y ∈ dom(f), there exists μ > 0 such that

    f(y) ≥ f(x) + ⟨∂f, y − x⟩ + (μ/2)‖y − x‖²,   (2)

where ∂f is any subgradient of f at x.
Theorem 4.1. In DP-SVRG (Algorithm 1), for ε ≤ c₁Tm/n² with some constant c₁ and δ > 0, it is
(ε, δ)-differentially private if

    σ² = c G²Tm ln(1/δ) / (n²ε²)   (3)

for some constant c.
Remark 4.1. The constraint on ε in Theorems 4.1 and 4.3 comes from Theorem 3.1. This constraint
can be removed if the noise σ is amplified by a factor of O(ln(T/δ)) in (3) and (6). But accordingly
there will be a factor of O(log(T√m/δ)) in the utility bounds in (5) and (7). In this case the guarantee
of differential privacy is by the advanced composition theorem and privacy amplification via sampling [6].
Theorem 4.2 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and
L-smooth over x, and F^r(x, D) is μ-strongly convex w.r.t. the ℓ₂-norm. In DP-SVRG (Algorithm 1), let σ
be as in (3). If one chooses η = Θ(1/L) ≤ 1/(12L) and sufficiently large m = Θ(L/μ) so that they satisfy the
inequality

    1/(μ(1 − 8ηL)ηm) + 8Lη(m + 1)/(m(1 − 8Lη)) < 1/2,   (4)

then the following holds for T = O(log( n²ε² / (pG² ln(1/δ)) )):

    E[F^r(x̃_T, D)] − F^r(x*, D) ≤ Õ( pG² log(n) log(1/δ) / (n²ε²μ) ),   (5)

where some insignificant logarithm terms are hiding in the Õ-notation. The total gradient complexity
is O((n + L/μ) log(nκ/p)).
Remark 4.2. We can further use some acceleration methods to reduce the gradient complexity, see
[25][3].
4.2
Non-strongly convex case
In some cases, F^r(x, D) may not be strongly convex. For such cases, [5] recently showed that
SVRG++ has less gradient complexity than Accelerated Gradient Descent. Following the idea of
DP-SVRG, we present the algorithm DP-SVRG++ for the non-strongly convex case. Unlike the
previous one, this algorithm can achieve the optimal utility bound.
Theorem 4.3. In DP-SVRG++ (Algorithm 2), for ε ≤ c₁2^T m/n² with some constant c₁ and δ > 0, it is
(ε, δ)-differentially private if

    σ² = c G² 2^T m ln(2/δ) / (n²ε²)   (6)

for some constant c.
Theorem 4.4 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and
L-smooth. In DP-SVRG++ (Algorithm 2), if σ is chosen as in (6), η = 1/(13L), and m = Θ(L) is
sufficiently large, then the following holds for T = O(log( nε / (G√(p log(1/δ))) )):

    E[F^r(x̃_T, D)] − F^r(x*, D) ≤ O( G√(p ln(1/δ)) / (nε) ).   (7)

The gradient complexity is O( nLε/√p + n log(nε/√p) ).
Algorithm 2 DP-SVRG++(F^r, x̃₀, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. x̃₀ is the initial point, η is the step size, and
T, m are the iteration numbers.
x₀¹ = x̃₀
for s = 1, 2, · · · , T do
    ṽ = ∇F(x̃_{s−1})
    m_s = 2^s m
    for t = 1, 2, · · · , m_s do
        Pick i_t^s ∈ [n]
        v_t^s = ∇f(x_{t−1}^s, z_{i_t^s}) − ∇f(x̃_{s−1}, z_{i_t^s}) + ṽ + u_t^s, where u_t^s ~ N(0, σ²I_p)
        x_t^s = prox_{ηr}(x_{t−1}^s − ηv_t^s)
    end for
    x̃_s = (1/m_s) Σ_{k=1}^{m_s} x_k^s
    x₀^{s+1} = x_{m_s}^s
end for
return x̃_T
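Relative to DP-SVRG, the only structural changes are the doubling epoch schedule m_s = 2^s m and restarting each epoch from the last iterate instead of the epoch average; the snippet below just illustrates the schedule (the values of m and T are arbitrary).

```python
m, T = 16, 5
epoch_lengths = [(2 ** s) * m for s in range(1, T + 1)]
print(epoch_lengths)  # [32, 64, 128, 256, 512]: total work is dominated by the last epoch
```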
5 Differentially Private ERM for Convex Loss Functions in High Dimensions

The utility bounds and gradient complexities in Section 4 depend on the dimensionality p. In the high-dimensional (i.e., p ≫ n) case, such a dependence is not very desirable. To alleviate this issue, we can usually get rid of the dependence on the dimensionality by reformulating the problem so that the goal is to find the parameter in some closed centrally symmetric convex set C ⊆ ℝ^p (such as the ℓ₁-norm ball), i.e.,

    min_{x∈C} F(x, D) = (1/n) Σ_{i=1}^n f(x, z_i),    (8)

where the loss function is convex.
[28], [29] showed that the √p term in (5), (7) can be replaced by the Gaussian width of C, which is no larger than O(√p) and can be significantly smaller in practice (for more details and examples one may refer to [28]). In this section, we propose a faster algorithm to achieve the upper utility bound. We first give some definitions.
Algorithm 3 DP-AccMD(F, x₀, T, σ, ω)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. ‖C‖₂ is the ℓ₂-norm diameter of the convex set C. ω is a function that is 1-strongly convex w.r.t. ‖·‖_C. x₀ is the initial point, and T is the iteration number.
  Define B_ω(y, x) = ω(y) − ⟨∇ω(x), y − x⟩ − ω(x)
  y₀, z₀ = x₀
  for k = 0, …, T − 1 do
    α_{k+1} = (k + 2)/(4L) and r_k = 1/(2·α_{k+1}·L)
    x_{k+1} = r_k·z_k + (1 − r_k)·y_k
    y_{k+1} = argmin_{y∈C} { (L‖C‖₂²/2)·‖y − x_{k+1}‖_C² + ⟨∇F(x_{k+1}), y − x_{k+1}⟩ }
    z_{k+1} = argmin_{z∈C} { B_ω(z, z_k) + α_{k+1}·⟨∇F(x_{k+1}) + b_{k+1}, z − z_k⟩ }, where b_{k+1} ∼ N(0, σ²·I_p)
  end for
  return y_T
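The two argmin steps of Algorithm 3 are abstract; the sketch below specializes them, as our own illustration, to C being the ℓ₂ ball of radius R with ω(x) = ‖x‖²/(2R²), which is 1-strongly convex w.r.t. ‖·‖_C = ‖·‖₂/R. Under this choice both steps reduce to Euclidean projections; these specializations are ours, not the paper's.

import numpy as np

def project_l2_ball(v, R):
    nrm = np.linalg.norm(v)
    return v if nrm <= R else v * (R / nrm)

def dp_accmd_l2ball(grad_F, x0, T, sigma, L, R, seed=0):
    # Sketch of DP-AccMD (Algorithm 3) for C = {x : ||x||_2 <= R}.
    # With omega(x) = ||x||^2 / (2 R^2), B_omega(z, z_k) = ||z - z_k||^2 / (2 R^2),
    # so the z-step is a projected gradient step scaled by alpha * R^2.
    rng = np.random.default_rng(seed)
    p = x0.size
    y, z = x0.copy(), x0.copy()
    for k in range(T):
        alpha = (k + 2) / (4.0 * L)
        r = 1.0 / (2.0 * alpha * L)            # r_k = 2/(k+2), always in (0, 1]
        x = r * z + (1.0 - r) * y
        g = grad_F(x)
        b = rng.normal(0.0, sigma, size=p)     # Gaussian noise b_{k+1}
        y = project_l2_ball(x - g / (4.0 * L), R)            # y-step (quadratic upper bound)
        z = project_l2_ball(z - alpha * R**2 * (g + b), R)   # noisy mirror (z) step
    return y

# Toy objective F(x) = 0.5 * ||x - x_star||^2 over the unit ball (our example).
x_star = np.array([0.3, -0.2, 0.1])
print(dp_accmd_l2ball(lambda x: x - x_star, np.zeros(3), T=200, sigma=0.01, L=1.0, R=1.0))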
Definition 5.1 (Minkowski Norm). The Minkowski norm (denoted by ‖·‖_C) with respect to a centrally symmetric convex set C ⊆ ℝ^p is defined as follows: for any vector v ∈ ℝ^p,

    ‖v‖_C = min{ r ∈ ℝ₊ : v ∈ rC }.

The dual norm of ‖·‖_C is denoted by ‖·‖_{C*}; for any vector v ∈ ℝ^p, ‖v‖_{C*} = max_{w∈C} |⟨w, v⟩|.
The following lemma implies that every convex function f(x, z) that is L-smooth with respect to the ℓ₂ norm is L‖C‖₂²-smooth with respect to the ‖·‖_C norm.
Lemma 5.1. For any vector v, we have ‖v‖₂ ≤ ‖C‖₂·‖v‖_C, where ‖C‖₂ is the ℓ₂-diameter of C, i.e., ‖C‖₂ = sup_{x,y∈C} ‖x − y‖₂.
Definition 5.2 (Gaussian Width). Let b ∼ N(0, I_p) be a Gaussian random vector in ℝ^p. The Gaussian width of a set C is defined as G_C = E_b[ sup_{w∈C} ⟨b, w⟩ ].
Lemma 5.2 ([28]). For W = (max_{w∈C} ⟨w, v⟩)² where v ∼ N(0, I_p), we have E_v[W] = O(G_C² + ‖C‖₂²).
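Definition 5.2 is easy to probe numerically. In the sketch below (our illustration), C is the unit ℓ₁ ball, for which sup_{w∈C} ⟨b, w⟩ = ‖b‖_∞, so G_C = E‖b‖_∞ = O(√(log p)), far below the O(√p) width of the ℓ₂ ball.

import numpy as np

def gaussian_width_l1_ball(p, n_samples=2000, seed=0):
    # Monte Carlo estimate of G_C for C = unit l1 ball in R^p:
    # sup_{w in C} <b, w> = ||b||_inf, so G_C = E ||b||_inf = O(sqrt(log p)).
    rng = np.random.default_rng(seed)
    b = rng.normal(size=(n_samples, p))
    return np.abs(b).max(axis=1).mean()

for p in (10, 100, 1000):
    print(p, gaussian_width_l1_ball(p), np.sqrt(p))  # compare to the l2-ball width ~ sqrt(p)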
Our algorithm DP-AccMD is based on the Accelerated Mirror Descent method, which was studied
in [4],[23].
Theorem 5.3. In DP-AccMD (Algorithm 3), for ε, δ > 0, it is (ε, δ)-differentially private if

    σ² = c · G²·T·ln(1/δ) / (n²ε²)    (9)

for some constant c.
Theorem 5.4 (Utility Guarantee). Suppose the loss function f(x, z) is G-Lipschitz and L-smooth over x ∈ C. In DP-AccMD, let σ be as in (9) and let ω be a function that is 1-strongly convex with respect to ‖·‖_C. Then if

    T² = O( L‖C‖₂²·√(B_ω(x*, x₀))·nε / ( G·√(ln(1/δ))·√(G_C² + ‖C‖₂²) ) ),

we have

    E[F(y_T, D)] − F(x*, D) ≤ O( √(B_ω(x*, x₀))·√(G_C² + ‖C‖₂²)·G·√(ln(1/δ)) / (nε) ).

The total gradient complexity is O( n^{1.5}·√L / (G_C² + ‖C‖₂²)^{1/4} ).
6 ERM for General Functions

In this section, we consider non-convex functions with a similar objective function as before,

    min_{x∈ℝ^p} F(x, D) = (1/n) Σ_{i=1}^n f(x, z_i).    (10)
Algorithm 4 DP-GD(x₀, F, η, T, σ, D)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C; F is under the assumptions; 0 < η ≤ 1/L is the step size; T is the iteration number.
  for t = 1, 2, …, T do
    x_t = x_{t−1} − η·(∇F(x_{t−1}, D) + z_{t−1}), where z_{t−1} ∼ N(0, σ²·I_p)
  end for
  return x_T (for Section 6.1)
  return x_m, where m is uniformly sampled from {0, 1, …, T − 1} (for Section 6.2)
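A minimal sketch of Algorithm 4, assuming access to a full-batch gradient oracle; the toy objective and all constants are ours.

import numpy as np

def dp_gd(grad_F, x0, eta, T, sigma, seed=0, return_last=True):
    # DP-GD (Algorithm 4): x_t = x_{t-1} - eta * (grad F(x_{t-1}) + z_{t-1}).
    rng = np.random.default_rng(seed)
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(T):
        noise = rng.normal(0.0, sigma, size=x.shape)
        x = x - eta * (grad_F(x) + noise)
        iterates.append(x.copy())
    if return_last:
        return x                                       # Section 6.1: return x_T
    return iterates[rng.integers(len(iterates) - 1)]   # Section 6.2: uniform over {0,...,T-1}

# Toy example: F(x) = 0.5 * ||x||^2, which satisfies the PL condition with mu = 1.
print(dp_gd(lambda x: x, np.ones(5), eta=0.1, T=100, sigma=0.05))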
Theorem 6.1. In DP-GD (Algorithm 4), for ε, δ > 0, it is (ε, δ)-differentially private if

    σ² = c · G²·T·ln(1/δ) / (n²ε²)    (11)

for some constant c.
6.1 Excess empirical risk for functions under the Polyak-Łojasiewicz condition

In this section, we consider the excess empirical risk in the case where the objective function F(x, D) satisfies the Polyak-Łojasiewicz condition. This topic has been studied in [18][27][26][24][22].
Definition 6.1 (Polyak-Łojasiewicz condition). For a function F(·), denote X* = argmin_{x∈ℝ^p} F(x) and F* = min_{x∈ℝ^p} F(x). Then there exists μ > 0 such that for every x,

    ‖∇F(x)‖² ≥ 2μ(F(x) − F*).    (12)

(12) guarantees that every critical point (i.e., every point where the gradient vanishes) is a global minimum. [18] shows that if F is differentiable and L-smooth w.r.t. the ℓ₂ norm, then we have the following chain of implications:
Strong Convexity ⇒ Essential Strong Convexity ⇒ Weak Strong Convexity ⇒ Restricted Secant Inequality ⇒ Polyak-Łojasiewicz Inequality ⇒ Error Bound
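As a quick sanity check of Definition 6.1: least squares F(x) = (1/2n)‖Ax − b‖² satisfies (12) with μ = s_min⁺(A)²/n, where s_min⁺ is the smallest nonzero singular value of A, even when AᵀA is singular, i.e., when F is convex but not strongly convex. Below is a numerical check on our own toy data.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 10))  # rank 3: convex, not strongly convex
b = rng.normal(size=50)
n = A.shape[0]

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)           # a minimizer of F
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) / n
grad = lambda x: A.T @ (A @ x - b) / n
mu = min(s for s in np.linalg.svd(A, compute_uv=False) if s > 1e-8) ** 2 / n

for _ in range(5):  # PL check: ||grad F(x)||^2 >= 2 * mu * (F(x) - F*) at random points
    x = rng.normal(size=10)
    lhs = np.sum(grad(x) ** 2)
    rhs = 2 * mu * (F(x) - F(x_star))
    print(lhs >= rhs - 1e-12)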
Theorem 6.2. Suppose that f(x, z) is G-Lipschitz and L-smooth over x ∈ C, and that F(x, D) satisfies the Polyak-Łojasiewicz condition. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then if T = Õ( log( n²ε² / (p·G²·log(1/δ)) ) ), the following holds:

    E[F(x_T, D)] − F(x*, D) ≤ Õ( G²·p·log²(n)·log(1/δ) / (n²ε²) ),    (13)

where Õ hides other log, L, μ terms.
DP-GD achieves a near optimal bound, since strongly convex functions can be seen as a special case of the class of functions satisfying the Polyak-Łojasiewicz condition. The lower bound for strongly convex functions is Ω(min{1, p/(n²ε²)}) [6]. Our result exceeds it by only a logarithmic multiplicative term; thus we achieve a near optimal bound in this sense.
6.2 Tight upper bound for the (non-)convex case

In [34], the authors considered (non-)convex smooth loss functions and measured the utility as ‖∇F(x_private, D)‖². They proposed an algorithm with gradient complexity O(n²). For this algorithm, they showed that E[‖∇F(x_private, D)‖²] ≤ Õ( log(n)·√(p·log(1/δ)) / (nε) ). By using DP-GD (Algorithm 4), we can eliminate the log(n) term.
Theorem 6.3. Suppose that f(x, z) is G-Lipschitz and L-smooth. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then when T = O( √L·nε / (√(p·log(1/δ))·G) ), we have

    E[‖∇F(x_m, D)‖²] ≤ O( √L·G·√(p·log(1/δ)) / (nε) ).    (14)
Remark 6.1. Although we can obtain the optimal bound by Theorem 3.1 using DP-SGD, there will be a constraint on ε. Also, we still do not know the lower bound of the utility under this measure; we leave it as an open problem.
7 Discussions

From the discussion in the previous sections, we know that when gradient perturbation is combined with linearly converging first order methods, a near optimal bound can be achieved with less gradient complexity. The remaining issue is whether the optimal bound can be obtained in this way. In Section 6.1, we considered functions satisfying the Polyak-Łojasiewicz condition and achieved a near optimal bound on the utility. It would be interesting to know the bounds for functions satisfying other conditions (such as general gradient-dominated functions [24], or the quasi-convex and locally-Lipschitz functions in [16]) under the differential privacy model. For general non-smooth convex loss functions (such as the SVM), we do not know whether the optimal bound is achievable with less time complexity. Finally, for non-convex loss functions, proposing a more easily interpretable measure of utility is another direction for future work.
References
[1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM, 2016.
[2] N. Agarwal and K. Singh. The price of differential privacy for online learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 32–40, 2017.
[3] Z. Allen-Zhu. Katyusha: the first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1200–1205. ACM, 2017.
[4] Z. Allen-Zhu and L. Orecchia. Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent. In Proceedings of the 8th Innovations in Theoretical Computer Science, ITCS '17, 2017.
[5] Z. Allen-Zhu and Y. Yuan. Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives. In Proceedings of the 33rd International Conference on Machine Learning, ICML '16, 2016.
[6] R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 464–473. IEEE, 2014.
[7] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems, pages 289–296, 2009.
[8] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069–1109, 2011.
[9] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal differentially private principal components. In Advances in Neural Information Processing Systems, pages 989–997, 2012.
[10] C. Dwork. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.
[11] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[12] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 51–60. IEEE, 2010.
[13] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 11–20. ACM, 2014.
[14] D. Feldman, A. Fiat, H. Kaplan, and K. Nissim. Private coresets. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 361–370. ACM, 2009.
[15] M. Hardt and A. Roth. Beyond worst-case analysis in private singular vector computation. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 331–340. ACM, 2013.
[16] E. Hazan, K. Levy, and S. Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization. In Advances in Neural Information Processing Systems, pages 1594–1602, 2015.
[17] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, volume 23, pages 24.1–24.34, 2012.
[18] H. Karimi, J. Nutini, and M. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016.
[19] S. P. Kasiviswanathan and H. Jin. Efficient private empirical risk minimization for high-dimensional learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 488–497, 2016.
[20] S. P. Kasiviswanathan, K. Nissim, and H. Jin. Private incremental regression. In Proceedings of the 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2017, Chicago, IL, USA, May 14-19, 2017, pages 167–182, 2017.
[21] D. Kifer, A. Smith, and A. Thakurta. Private convex empirical risk minimization and high-dimensional regression. Journal of Machine Learning Research, 1(41):3–1, 2012.
[22] G. Li and T. K. Pong. Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods. arXiv preprint arXiv:1602.02915, 2016.
[23] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[24] Y. Nesterov and B. T. Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
[25] A. Nitanda. Stochastic proximal gradient descent with acceleration techniques. In Advances in Neural Information Processing Systems, pages 1574–1582, 2014.
[26] B. T. Polyak. Gradient methods for the minimisation of functionals. USSR Computational Mathematics and Mathematical Physics, 3(4):864–878, 1963.
[27] S. J. Reddi, A. Hefny, S. Sra, B. Poczos, and A. Smola. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pages 314–323, 2016.
[28] K. Talwar, A. Thakurta, and L. Zhang. Private empirical risk minimization beyond the worst case: The effect of the constraint set geometry. arXiv preprint arXiv:1411.5417, 2014.
[29] K. Talwar, A. Thakurta, and L. Zhang. Nearly optimal private lasso. In Advances in Neural Information Processing Systems, pages 3025–3033, 2015.
[30] A. G. Thakurta and A. Smith. (Nearly) optimal algorithms for private online learning in full-information and bandit settings. In Advances in Neural Information Processing Systems, pages 2733–2741, 2013.
[31] Y.-X. Wang, J. Lei, and S. E. Fienberg. Learning with differential privacy: Stability, learnability and the sufficiency and necessity of ERM principle. Journal of Machine Learning Research, 17(183):1–40, 2016.
[32] X. Wu, M. Fredrikson, W. Wu, S. Jha, and J. F. Naughton. Revisiting differentially private regression: Lessons from learning theory and their consequences. arXiv preprint arXiv:1512.06388, 2015.
[33] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[34] J. Zhang, K. Zheng, W. Mou, and L. Wang. Efficient private ERM for smooth objectives. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 3922–3928, 2017.
6,485 | 6,866 | Variational Inference via χ Upper Bound Minimization
Adji B. Dieng
Columbia University
Dustin Tran
Columbia University
John Paisley
Columbia University
Rajesh Ranganath
Princeton University
David M. Blei
Columbia University
Abstract
Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q‖p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p‖q), the χ-divergence from p to q. CHIVI minimizes an upper bound of the model evidence, which we term the χ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.
1 Introduction
Bayesian analysis provides a foundation for reasoning with probabilistic models. We first set a joint
distribution p(x, z) of latent variables z and observed variables x. We then analyze data through the
posterior, p(z | x). In most applications, the posterior is difficult to compute because the marginal
likelihood p(x) is intractable. We must use approximate posterior inference methods such as Monte
Carlo [1] and variational inference [2]. This paper focuses on variational inference.
Variational inference approximates the posterior using optimization. The idea is to posit a family of
approximating distributions and then to find the member of the family that is closest to the posterior.
Typically, closeness is defined by the Kullback-Leibler (KL) divergence KL(q‖p), where q(z; λ) is a variational family indexed by parameters λ. This approach, which we call KLVI, also provides the evidence lower bound (ELBO), a convenient lower bound of the model evidence log p(x).
KLVI scales well and is suited to applications that use complex models to analyze large data sets [3].
But it has drawbacks. For one, it tends to favor underdispersed approximations relative to the exact
posterior [4, 5]. This produces difficulties with light-tailed posteriors when the variational distribution
has heavier tails. For example, KLVI for Gaussian process classification typically uses a Gaussian
approximation; this leads to unstable optimization and a poor approximation [6].
One alternative to KLVI is expectation propagation (EP), which enjoys good empirical performance
on models with light-tailed posteriors [7, 8]. Procedurally, EP reverses the arguments in the KL divergence and performs local minimizations of KL(p‖q); this corresponds to iterative moment matching
on partitions of the data. Relative to KLVI, EP produces overdispersed approximations. But EP also
has drawbacks. It is not guaranteed to converge [7, Figure 3.6]; it does not provide an easy estimate
of the marginal likelihood; and it does not optimize a well-defined global objective [9].
In this paper we develop a new algorithm for approximate posterior inference, χ-divergence variational inference (CHIVI). CHIVI minimizes the χ-divergence from the posterior to the variational family,

    D_{χ²}(p‖q) = E_{q(z;λ)}[ (p(z | x) / q(z; λ))² − 1 ].    (1)
CHIVI enjoys advantages of both EP and KLVI. Like EP, it produces overdispersed approximations; like KLVI, it optimizes a well-defined objective and estimates the model evidence.
As we mentioned, KLVI optimizes a lower bound on the model evidence. The idea behind CHIVI is to optimize an upper bound, which we call the χ upper bound (CUBO). Minimizing the CUBO is equivalent to minimizing the χ-divergence. In providing an upper bound, CHIVI can be used (in concert with KLVI) to sandwich estimate the model evidence. Sandwich estimates are useful for tasks like model selection [10]. Existing work on sandwich estimation relies on MCMC and only evaluates simulated data [11]. We derive a sandwich theorem (Section 2) that relates CUBO and ELBO. Section 3 demonstrates sandwich estimation on real data.
Aside from providing an upper bound, there are two additional benefits to CHIVI. First, it is a
black-box inference algorithm [12] in that it does not need model-specific derivations and it is easy to
apply to a wide class of models. It minimizes an upper bound in a principled way using unbiased
reparameterization gradients [13, 14] of the exponentiated CUBO.
Second, it is a viable alternative to EP. The χ-divergence enjoys the same "zero-avoiding" behavior of EP, which seeks to place positive mass everywhere, and so CHIVI is useful when the KL divergence is not a good objective (such as for light-tailed posteriors). Unlike EP, CHIVI is guaranteed to converge, provides an easy estimate of the marginal likelihood, and optimizes a well-defined global objective. Section 3 shows that CHIVI outperforms KLVI and EP for Gaussian process classification.
The rest of this paper is organized as follows. Section 2 derives the CUBO, develops CHIVI, and
expands on its zero-avoiding property that finds overdispersed posterior approximations. Section 3
applies CHIVI to Bayesian probit regression, Gaussian process classification, and a Cox process
model of basketball plays. On Bayesian probit regression and Gaussian process classification, it
yielded lower classification error than KLVI and EP. When modeling basketball data with a Cox
process, it gave more accurate estimates of posterior variance than KLVI.
Related work. The most widely studied variational objective is KL(q‖p). The main alternative is EP [15, 7], which locally minimizes KL(p‖q). Recent work revisits EP from the perspective of distributed computing [16, 17, 18] and also revisits [19], which studies local minimizations with the general family of α-divergences [20, 21]. CHIVI relates to EP and its extensions in that it leads to overdispersed approximations relative to KLVI. However, unlike [19, 20], CHIVI does not rely on tying local factors; it optimizes a well-defined global objective. In this sense, CHIVI relates to the recent work on alternative divergence measures for variational inference [21, 22].
A closely related work is [21]. They perform black-box variational inference using the reverse α-divergence D_α(q‖p), which is a valid divergence when α > 0.¹ Their work shows that minimizing D_α(q‖p) is equivalent to maximizing a lower bound of the model evidence. No positive value of α in D_α(q‖p) leads to the χ-divergence. Even though taking α → 0 leads to the CUBO, it does not correspond to a valid divergence in D_α(q‖p). The algorithm in [21] also cannot minimize the upper bound we study in this paper. In this sense, our work complements [21].
An exciting concurrent work by [23] also studies the χ-divergence. Their work focuses on upper bounding the partition function in undirected graphical models. This is a complementary application: Bayesian inference and undirected models both involve an intractable normalizing constant.
2 χ-Divergence Variational Inference
We present the χ-divergence for variational inference. We describe some of its properties and develop CHIVI, a black box algorithm that minimizes the χ-divergence for a large class of models.
¹It satisfies D(p‖q) ≥ 0 and D(p‖q) = 0 ⟺ p = q almost everywhere.
Variational inference (VI) casts Bayesian inference as optimization [24]. VI posits a family of approximating distributions and finds the closest member to the posterior. In its typical formulation, VI minimizes the Kullback-Leibler divergence from q(z; λ) to p(z | x). Minimizing the KL divergence is equivalent to maximizing the ELBO, a lower bound to the model evidence log p(x).
2.1 The χ-divergence
Maximizing the ELBO imposes properties on the resulting approximation, such as underestimation of the posterior's support [4, 5]. These properties may be undesirable, especially when dealing with light-tailed posteriors such as in Gaussian process classification [6].
We consider the χ-divergence (Equation 1). Minimizing the χ-divergence induces alternative properties on the resulting approximation. (See Appendix 5 for more details on all these properties.) Below we describe a key property, which leads to overestimation of the posterior's support.
Zero-avoiding behavior: Optimizing the χ-divergence leads to a variational distribution with a zero-avoiding behavior, which is similar to EP [25]. Namely, the χ-divergence is infinite whenever q(z; λ) = 0 and p(z | x) > 0. Thus when minimizing it, setting p(z | x) > 0 forces q(z; λ) > 0. This means q avoids having zero mass at locations where p has nonzero mass.
The classical objective KL(q‖p) leads to approximate posteriors with the opposite behavior, called zero-forcing. Namely, KL(q‖p) is infinite when p(z | x) = 0 and q(z; λ) > 0. Therefore the optimal variational distribution q will be 0 when p(z | x) = 0. This zero-forcing behavior leads to degenerate solutions during optimization, and is the source of "pruning" often reported in the literature (e.g., [26, 27]). For example, if the approximating family q has heavier tails than the target posterior p, the variational distribution must be overconfident enough that the heavier tail does not allocate mass outside the lighter tail's support.²
2.2 CUBO: the χ Upper Bound
We derive a tractable objective for variational inference with the χ²-divergence, and also generalize it to the χⁿ-divergence for n > 1. Consider the optimization problem of minimizing Equation 1. We seek a relationship between the χ²-divergence and log p(x). Consider

    E_{q(z;λ)}[ (p(x, z) / q(z; λ))² ] = p(x)² · E_{q(z;λ)}[ (p(z | x) / q(z; λ))² ] = p(x)² · [ 1 + D_{χ²}(p(z|x)‖q(z; λ)) ].

Taking logarithms on both sides, we find a relationship analogous to how KL(q‖p) relates to the ELBO. Namely, the χ²-divergence satisfies

    (1/2)·log( 1 + D_{χ²}(p(z|x)‖q(z; λ)) ) = −log p(x) + (1/2)·log E_{q(z;λ)}[ (p(x, z) / q(z; λ))² ].
By monotonicity of log, and because log p(x) is constant, minimizing the χ²-divergence is equivalent to minimizing

    L_{χ²}(λ) = (1/2)·log E_{q(z;λ)}[ (p(x, z) / q(z; λ))² ].

Furthermore, by nonnegativity of the χ²-divergence, this quantity is an upper bound to the model evidence. We call this objective the χ upper bound (CUBO).
A general upper bound. The derivation extends to upper bound the general χⁿ-divergence,

    L_{χⁿ}(λ) = (1/n)·log E_{q(z;λ)}[ (p(x, z) / q(z; λ))ⁿ ] = CUBO_n.    (2)

This produces a family of bounds. When n < 1, CUBO_n is a lower bound, and minimizing it for these values of n does not minimize the χ-divergence (rather, when n < 1, we recover the reverse α-divergence and the VR-bound [21]). When n = 1, the bound is tight: CUBO₁ = log p(x). For n ≥ 1, CUBO_n is an upper bound to the model evidence. In this paper we focus on n = 2. Other values of n are possible depending on the application and dataset. We chose n = 2 because it is the most standard, and is equivalent to finding the optimal proposal in importance sampling. See Appendix 4 for more details.

²Zero-forcing may be preferable in settings such as multimodal posteriors with unimodal approximations: for predictive tasks, it helps to concentrate on one mode rather than spread mass over all of them [5]. In this paper, we focus on applications with light-tailed posteriors and one to relatively few modes.
Sandwiching the model evidence. Equation 2 has practical value. We can minimize the CUBO_n and maximize the ELBO. This produces a sandwich on the model evidence. (See Appendix 8 for a simulated illustration.) The following sandwich theorem states that the gap induced by CUBO_n and ELBO increases with n. This suggests that letting n get as close to 1 as possible enables approximating log p(x) with higher precision. When we further decrease n to 0, CUBO_n becomes a lower bound and tends to the ELBO.
Theorem 1 (Sandwich Theorem): Define CUBO_n as in Equation 2. Then the following holds:
• ∀n ≥ 1, ELBO ≤ log p(x) ≤ CUBO_n.
• ∀n ≥ 1, CUBO_n is a non-decreasing function of the order n of the χ-divergence.
• lim_{n→0} CUBO_n = ELBO.
See the proof in Appendix 1. Theorem 1 can be utilized for estimating log p(x), which is important for many applications such as the evidence framework [28], where the marginal likelihood is argued to embody an Occam's razor. Model selection based solely on the ELBO is inappropriate because of the possible variation in the tightness of this bound. With an accompanying upper bound, one can perform what we call maximum entropy model selection, in which each model evidence value is chosen to be that which maximizes the entropy of the resulting distribution on models. We leave this as future work. Theorem 1 can also help estimate Bayes factors [29]. In general, this technique is important as there is little existing work: for example, Ref. [11] proposes an MCMC approach and evaluates simulated data. We illustrate sandwich estimation in Section 3 on UCI datasets.
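To see Theorem 1 in action on a case with a known answer, take the conjugate model z ∼ N(0,1), x | z ∼ N(z,1), for which the marginal is available in closed form (x ∼ N(0,2)). The sketch below is our construction, not the paper's experiment: it Monte Carlo estimates the ELBO and CUBO₂ for a Gaussian q and checks ELBO ≤ log p(x) ≤ CUBO₂. As Section 2.3 explains, the naive Monte Carlo estimate of the CUBO is biased downward, so a large sample size is used to keep the sandwich visible.

import numpy as np

def log_joint(x, z):
    # log p(x, z) for z ~ N(0,1), x | z ~ N(z, 1).
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

def bounds(x, mu, s, n_mc=200000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(mu, s, size=n_mc)
    log_q = -0.5 * ((z - mu) / s)**2 - np.log(s * np.sqrt(2 * np.pi))
    log_w = log_joint(x, z) - log_q
    elbo = log_w.mean()
    # CUBO_2 = 0.5 * log E[exp(2 log w)], stabilized by subtracting the max.
    cubo2 = 0.5 * np.log(np.mean(np.exp(2 * (log_w - log_w.max())))) + log_w.max()
    return elbo, cubo2

x = 1.3
log_px = -0.25 * x**2 - 0.5 * np.log(2 * np.pi * 2)  # exact log N(x; 0, 2)
elbo, cubo2 = bounds(x, mu=0.5, s=0.9)
print(elbo, log_px, cubo2)  # expect elbo <= log p(x) <= cubo2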
2.3 Optimizing the CUBO

We derived the CUBO_n, a general upper bound to the model evidence that can be used to minimize the χ-divergence. We now develop CHIVI, a black box algorithm that minimizes CUBO_n.
The goal in CHIVI is to minimize the CUBO_n with respect to variational parameters,

    CUBO_n(λ) = (1/n)·log E_{q(z;λ)}[ (p(x, z) / q(z; λ))ⁿ ].

The expectation in the CUBO_n is usually intractable. Thus we use Monte Carlo to construct an estimate. One approach is to naively perform Monte Carlo on this objective,

    CUBO_n(λ) ≈ (1/n)·log[ (1/S)·Σ_{s=1}^S (p(x, z⁽ˢ⁾) / q(z⁽ˢ⁾; λ))ⁿ ],
for S samples z⁽¹⁾, …, z⁽ˢ⁾ ∼ q(z; λ). However, by Jensen's inequality, the log transform of the expectation implies that this is a biased estimate of CUBO_n(λ):

    E_q[ (1/n)·log( (1/S)·Σ_{s=1}^S (p(x, z⁽ˢ⁾) / q(z⁽ˢ⁾; λ))ⁿ ) ] ≠ CUBO_n.

In fact this expectation changes during optimization and depends on the sample size S. The objective is not guaranteed to be an upper bound if S is not chosen appropriately from the beginning. This problem does not exist for lower bounds, because a Monte Carlo approximation of a lower bound is still a lower bound; this is why the approach in [21] works for lower bounds but not for upper bounds. Furthermore, gradients of this biased Monte Carlo objective are also biased.
We propose a way to minimize upper bounds which can also be used for lower bounds. The approach keeps the upper bounding property intact. It does so by minimizing a Monte Carlo approximation of the exponentiated upper bound,

    L = exp{ n · CUBO_n(λ) }.
Algorithm 1: χ-divergence variational inference (CHIVI)
Input: Data x, model p(x, z), variational family q(z; λ).
Output: Variational parameters λ.
Initialize λ randomly.
while not converged do
  Draw S samples z⁽¹⁾, …, z⁽ˢ⁾ from q(z; λ) and a data subsample {x_{i₁}, …, x_{i_M}}.
  Set ρ_t according to a learning rate schedule.
  Set log w⁽ˢ⁾ = log p(z⁽ˢ⁾) + (N/M)·Σ_{j=1}^M log p(x_{i_j} | z⁽ˢ⁾) − log q(z⁽ˢ⁾; λ_t), for s ∈ {1, …, S}.
  Set w⁽ˢ⁾ = exp( log w⁽ˢ⁾ − max_s log w⁽ˢ⁾ ), for s ∈ {1, …, S}.
  Update λ_{t+1} = λ_t − ((1 − n)·ρ_t / S) · Σ_{s=1}^S [w⁽ˢ⁾]ⁿ · ∇_λ log q(z⁽ˢ⁾; λ_t).
end
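The following NumPy sketch runs Algorithm 1 on a toy model, z ∼ N(0,1) with x_i | z ∼ N(z,1), using a Gaussian variational family q(z; λ) = N(m, e^{2ρ}). The full data is used (no subsampling, so N/M = 1), and the model, parameterization, sample size, and step size are our illustrative choices.

import numpy as np

def chivi_toy(x_data, n=2, S=64, T=2000, lr=0.02, seed=0):
    # Sketch of Algorithm 1 for z ~ N(0,1), x_i | z ~ N(z,1), q(z; lam) = N(m, exp(2*rho)).
    rng = np.random.default_rng(seed)
    m, rho = 0.0, 0.0
    for t in range(T):
        s_dev = np.exp(rho)
        z = rng.normal(m, s_dev, size=S)
        # log p(x, z) and log q(z), each up to an additive constant; the constants
        # only shift log_w uniformly and are absorbed by the max-subtraction below.
        log_p = -0.5 * z**2 - 0.5 * np.sum((x_data[None, :] - z[:, None])**2, axis=1)
        log_q = -0.5 * ((z - m) / s_dev)**2 - rho
        log_w = log_p - log_q
        w = np.exp(log_w - log_w.max())        # subtract the max to avoid overflow
        score_m = (z - m) / s_dev**2           # d/dm log q
        score_rho = ((z - m) / s_dev)**2 - 1.0 # d/drho log q
        step = (1 - n) * lr / S                # Algorithm 1 update: lam -= (1-n) rho_t / S * sum(w^n * score)
        m -= step * np.sum(w**n * score_m)
        rho -= step * np.sum(w**n * score_rho)
    return m, np.exp(rho)

x = np.array([0.8, 1.1, 1.6, 0.9])
print(chivi_toy(x))  # exact posterior is N(sum(x)/(N+1), 1/(N+1)) = N(0.88, 0.2)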
By monotonicity of exp, this objective admits the same optima as CUBO_n(λ). Monte Carlo produces an unbiased estimate, and the number of samples only affects the variance of the gradients. We minimize it using reparameterization gradients [13, 14]. These gradients apply to models with differentiable latent variables. Formally, assume we can rewrite the generative process as z = g(λ, ε), where ε ∼ p(ε), for some deterministic function g. Then

    L̂ = (1/B)·Σ_{b=1}^B ( p(x, g(λ, ε⁽ᵇ⁾)) / q(g(λ, ε⁽ᵇ⁾); λ) )ⁿ

is an unbiased estimator of L, and its gradient is

    ∇_λ L̂ = (n/B)·Σ_{b=1}^B ( p(x, g(λ, ε⁽ᵇ⁾)) / q(g(λ, ε⁽ᵇ⁾); λ) )ⁿ · ∇_λ log( p(x, g(λ, ε⁽ᵇ⁾)) / q(g(λ, ε⁽ᵇ⁾); λ) ).    (3)

(See Appendix 7 for a more detailed derivation and also a more general alternative with score function gradients [30].)
Computing Equation 3 requires the full dataset x. We can apply the "average likelihood" technique from EP [18, 31]. Consider data {x₁, …, x_N} and a subsample {x_{i₁}, …, x_{i_M}}. We approximate the full log-likelihood by

    log p(x | z) ≈ (N/M)·Σ_{j=1}^M log p(x_{i_j} | z).

Using this proxy to the full dataset we derive CHIVI, an algorithm in which each iteration depends only on a mini-batch of data. CHIVI is a black box algorithm for performing approximate inference with the χⁿ-divergence. Algorithm 1 summarizes the procedure. In practice, we subtract the maximum of the logarithm of the importance weights, defined as

    log w = log p(x, z) − log q(z; λ),

to avoid underflow. Stochastic optimization theory still gives us convergence with this approach [32].
3 Empirical Study

We developed CHIVI, a black box variational inference algorithm for minimizing the χ-divergence. We now study CHIVI with several models: probit regression, Gaussian process (GP) classification, and Cox processes. With probit regression, we demonstrate the sandwich estimator on real and synthetic data. CHIVI provides a useful tool to estimate the marginal likelihood. We also show that for this model, where the ELBO is applicable, CHIVI works well and yields good test error rates.
Figure 1: Sandwich gap via CHIVI and BBVI on different datasets. The first two plots correspond to sandwich plots for the two UCI datasets Ionosphere and Heart, respectively. The last plot corresponds to a sandwich for generated data where we know the log marginal likelihood of the data. There the gap is tight after only a few iterations. More sandwich plots can be found in the appendix. [Plots: objective (upper bound and lower bound) versus epoch.]
Table 1: Test error for Bayesian probit regression. The lower the better. CHIVI (this paper) yields lower test error rates when compared to BBVI [12] and EP on most datasets.

Dataset      BBVI             EP               CHIVI
Pima         0.235 ± 0.006    0.234 ± 0.006    0.222 ± 0.048
Ionos        0.123 ± 0.008    0.124 ± 0.008    0.116 ± 0.05
Madelon      0.457 ± 0.005    0.445 ± 0.005    0.453 ± 0.029
Covertype    0.157 ± 0.01     0.155 ± 0.018    0.154 ± 0.014
Second, we compare CHIVI to Laplace and EP on GP classification, a model class for which KLVI fails (because the typically chosen variational distribution has heavier tails than the posterior).³ In these settings, EP has been the method of choice. CHIVI outperforms both of these methods.
Third, we show that CHIVI does not suffer from the posterior support underestimation problem
resulting from maximizing the ELBO. For that we analyze Cox processes, a type of spatial point
process, to compare profiles of different NBA basketball players. We find CHIVI yields better
posterior uncertainty estimates (using HMC as the ground truth).
3.1 Bayesian Probit Regression
We analyze inference for Bayesian probit regression. First, we illustrate sandwich estimation on UCI
datasets. Figure 1 illustrates the bounds of the log marginal likelihood given by the ELBO and the
CUBO . Using both quantities provides a reliable approximation of the model evidence. In addition,
these figures show convergence for CHIVI, which EP does not always satisfy.
We also compared the predictive performance of CHIVI, EP, and KLVI. We used a minibatch size
of 64 and 2000 iterations for each batch. We computed the average classification error rate and the
standard deviation using 50 random splits of the data. We split all the datasets with 90% of the
data for training and 10% for testing. For the Covertype dataset, we implemented Bayesian probit
regression to discriminate the class 1 against all other classes. Table 1 shows the average error rate
for KLVI, EP, and CHIVI. CHIVI performs better for all but one dataset.
3.2 Gaussian Process Classification
GP classification is an alternative to probit regression. The posterior is analytically intractable because
the likelihood is not conjugate to the prior. Moreover, the posterior tends to be skewed. EP has been
the method of choice for approximating the posterior [8]. We choose a factorized Gaussian for the
variational distribution q and fit its mean and log variance parameters.
With UCI benchmark datasets, we compared the predictive performance of CHIVI to EP and Laplace.
Table 2 summarizes the results. The error rates for CHIVI correspond to the average of 10 error
rates obtained by dividing the data into 10 folds, applying CHIVI to 9 folds to learn the variational
parameters and performing prediction on the remainder. The kernel hyperparameters were chosen
³For KLVI, we use the black box variational inference (BBVI) version [12], specifically via Edward [33].
Table 2: Test error for Gaussian process classification. The lower the better. CHIVI (this paper) yields lower test error rates when compared to Laplace and EP on most datasets.

Dataset   Laplace   EP            CHIVI
Crabs     0.02      0.02          0.03 ± 0.03
Sonar     0.154     0.139         0.055 ± 0.035
Ionos     0.084     0.08 ± 0.04   0.069 ± 0.034
Table 3: Average L1 error for posterior uncertainty estimates (ground truth from HMC). We find that CHIVI is similar to or better than BBVI at capturing posterior uncertainties. Demarcus Cousins, who plays center, stands out in particular. His shots are concentrated near the basket, so the posterior is uncertain over a large part of the court (Figure 2).

Player      CHIVI    BBVI
Curry       0.060    0.0825
Demarcus    0.066    0.0812
Lebron      0.073    0.0849
Duncan      0.082    0.0871
using grid search. The error rates for the other methods correspond to the best results reported in [8]
and [34]. On all the datasets CHIVI performs as well or better than EP and Laplace.
3.3 Cox Processes
Finally we study Cox processes. They are Poisson processes with stochastic rate functions. They capture dependence between the frequency of points in different regions of a space. We apply Cox processes to model the spatial locations of shots (made and missed) from the 2015-2016 NBA season [35]. The data are from 308 NBA players who took more than 150,000 shots in total. The nth player's set of M_n shot attempts is x_n = {x_{n,1}, …, x_{n,M_n}}, and the location of the mth shot by the nth player in the basketball court is x_{n,m} ∈ [−25, 25] × [0, 40]. Let PP(λ) denote a Poisson process with intensity function λ, and let K be a covariance matrix resulting from a kernel applied to every location of the court. The generative process for the nth player's shots is

    K_{i,j} = k(x_i, x_j) = σ²·exp( −‖x_i − x_j‖² / (2φ²) ),
    f ∼ GP(0, k(·, ·));   λ = exp(f);   x_{n,k} ∼ PP(λ) for k ∈ {1, …, M_n}.
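A minimal sketch of this kernel construction over a discretized court grid follows; the hyperparameter values for σ² and φ are placeholders of ours (the paper selects kernel hyperparameters by grid search, as noted below).

import numpy as np

def rbf_kernel(locs, sig2=1.0, phi=5.0):
    # K_{ij} = sig2 * exp(-||x_i - x_j||^2 / (2 * phi^2)) over court locations.
    d2 = np.sum((locs[:, None, :] - locs[None, :, :]) ** 2, axis=-1)
    return sig2 * np.exp(-d2 / (2 * phi**2))

# Coarse grid over the half court [-25, 25] x [0, 40], for illustration only.
gx, gy = np.meshgrid(np.linspace(-25, 25, 10), np.linspace(0, 40, 8))
locs = np.column_stack([gx.ravel(), gy.ravel()])
K = rbf_kernel(locs) + 1e-6 * np.eye(len(locs))  # jitter for numerical stability
f = np.linalg.cholesky(K) @ np.random.default_rng(0).normal(size=len(locs))
lam = np.exp(f)  # discretized intensity surface lambda = exp(f)
print(lam.shape, lam.min(), lam.max())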
The kernel of the Gaussian process encodes the spatial correlation between different areas of the
basketball court. The model treats the N players as independent. But the kernel K introduces
correlation between the shots attempted by a given player.
Our goal is to infer the intensity functions ?(.) for each player. We compare the shooting profiles
of different players using these inferred intensity surfaces. The results are shown in Figure 2. The
shooting profiles of Demarcus Cousins and Stephen Curry are captured by both BBVI and CHIVI.
BBVI has lower posterior uncertainty while CHIVI provides more overdispersed solutions. We plot
the profiles for two more players, LeBron James and Tim Duncan, in the appendix.
In Table 3, we compare the posterior uncertainty estimates of CHIVI and BBVI to that of HMC, a
computationally expensive Markov chain Monte Carlo procedure that we treat as exact. We use the
average L1 distance from HMC as error measure. We do this on four different players: Stephen Curry,
Demarcus Cousins, LeBron James, and Tim Duncan. We find that CHIVI is similar or better than
BBVI , especially on players like Demarcus Cousins who shoot in a limited part of the court.
4 Discussion
We described CHIVI, a black box algorithm that minimizes the χ-divergence by minimizing the CUBO. We motivated CHIVI as a useful alternative to EP. We justified how the approach used in CHIVI enables upper bound minimization, contrary to existing α-divergence minimization techniques. This enables sandwich estimation using variational inference instead of Markov chain Monte Carlo.
[Figure 2 panels: shot charts, posterior intensity surfaces, and posterior uncertainty surfaces for Curry and Demarcus under KLQP, CHIVI (Chi), and HMC.]
Figure 2: Basketball players' shooting profiles as inferred by BBVI [12], CHIVI (this paper), and
Hamiltonian Monte Carlo (HMC). The top row displays the raw data, consisting of made shots
(green) and missed shots (red). The second and third rows display the posterior intensities inferred
by BBVI, CHIVI, and HMC for Stephen Curry and Demarcus Cousins respectively. Both BBVI and
CHIVI capture the shooting behavior of both players in terms of the posterior mean. The last two
rows display the posterior uncertainty inferred by BBVI, CHIVI, and HMC for Stephen Curry and
Demarcus Cousins respectively. CHIVI tends to get higher posterior uncertainty for both players in
areas where data is scarce compared to BBVI. This illustrates the variance underestimation problem
of KLVI, which is not the case for CHIVI. More player profiles with posterior mean and uncertainty
estimates can be found in the appendix.
We illustrated this by showing how to use CHIVI in concert with KLVI to sandwich-estimate the
model evidence. Finally, we showed that CHIVI is an effective algorithm for Bayesian probit
regression, Gaussian process classification, and Cox processes.
Performing VI via upper bound minimization, and hence enabling overdispersed posterior approximations, sandwich estimation, and model selection, comes with a cost. Exponentiating the original
CUBO bound leads to high variance during optimization even with reparameterization gradients.
Developing variance reduction schemes for these types of objectives (expectations of likelihood
ratios) is an open research problem; solutions will benefit this paper and related approaches.
Acknowledgments
We thank Alp Kucukelbir, Francisco J. R. Ruiz, Christian A. Naesseth, Scott W. Linderman,
Maja Rudolph, and Jaan Altosaar for their insightful comments. This work is supported by NSF
IIS-1247664, ONR N00014-11-1-0651, DARPA PPAML FA8750-14-2-0009, DARPA SIMPLEX
N66001-15-C-4032, the Alfred P. Sloan Foundation, and the John Simon Guggenheim Foundation.
References
[1] C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2004.
[2] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for
graphical models. Machine Learning, 37:183–233, 1999.
[3] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR,
2013.
[4] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT press, 2012.
[5] C. M. Bishop. Pattern recognition. Machine Learning, 128, 2006.
[6] J. Hensman, M. Zwiessele, and N. D. Lawrence. Tilted variational Bayes. JMLR, 2014.
[7] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[8] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process
classification. JMLR, 6:1679–1704, 2005.
[9] M. J. Beal. Variational algorithms for approximate Bayesian inference. University of London,
2003.
[10] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[11] R. B. Grosse, Z. Ghahramani, and R. P. Adams. Sandwiching the marginal likelihood using
bidirectional monte carlo. arXiv preprint arXiv:1511.02543, 2015.
[12] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, 2014.
[13] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[14] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate
Inference in Deep Generative Models. In ICML, 2014.
[15] M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural
Computation, 12(11):2655–2684, 2000.
[16] Andrew Gelman, Aki Vehtari, Pasi Jylänki, Tuomas Sivula, Dustin Tran, Swupnil Sahai, Paul
Blomstedt, John P Cunningham, David Schiminovich, and Christian Robert. Expectation
propagation as a way of life: A framework for Bayesian inference on partitioned data. arXiv
preprint arXiv:1412.4869, 2017.
[17] Y. W. Teh, L. Hasenclever, T. Lienart, S. Vollmer, S. Webb, B. Lakshminarayanan, and C. Blundell. Distributed Bayesian learning with stochastic natural-gradient expectation propagation
and the posterior server. arXiv preprint arXiv:1512.09327, 2015.
[18] Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Stochastic Expectation Propagation. In NIPS,
2015.
[19] T. Minka. Power EP. Technical report, Microsoft Research, 2004.
[20] J. M. Hernández-Lobato, Y. Li, D. Hernández-Lobato, T. Bui, and R. E. Turner. Black-box α-divergence minimization. ICML, 2016.
[21] Y. Li and R. E. Turner. Variational inference with Rényi divergence. In NIPS, 2016.
[22] Rajesh Ranganath, Jaan Altosaar, Dustin Tran, and David M. Blei. Operator variational
inference. In NIPS, 2016.
[23] Volodymyr Kuleshov and Stefano Ermon. Neural variational inference and learning in undirected
graphical models. In NIPS, 2017.
[24] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational
methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[25] T. Minka. Divergence measures and message passing. Technical report, Microsoft Research,
2005.
[26] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. In
International Conference on Learning Representations, 2016.
[27] Matthew D Hoffman. Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo.
In International Conference on Machine Learning, 2017.
[28] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge Univ.
Press, 2003.
[29] A. E. Raftery. Bayesian model selection in social research. Sociological Methodology, 25:111–164, 1995.
[30] J. Paisley, D. Blei, and M. Jordan. Variational Bayesian inference with stochastic search. In
ICML, 2012.
[31] G. Dehaene and S. Barthelmé. Expectation propagation in the large-data limit. In NIPS, 2015.
[32] Peter Sunehag, Jochen Trumpf, SVN Vishwanathan, Nicol N Schraudolph, et al. Variable metric
stochastic approximation theory. In AISTATS, pages 560–566, 2009.
[33] Dustin Tran, Alp Kucukelbir, Adji B Dieng, Maja Rudolph, Dawen Liang, and David M
Blei. Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint
arXiv:1610.09787, 2016.
[34] H. Kim and Z. Ghahramani. The em-ep algorithm for gaussian process classification. In
Proceedings of the Workshop on Probabilistic Graphical Models for Classification (ECML),
pages 37–48, 2003.
[35] A. Miller, L. Bornn, R. Adams, and K. Goldsberry. Factorized point process intensities: A
spatial analysis of professional basketball. In ICML, 2014.
6,486 | 6,867 | On Quadratic Convergence of DC Proximal Newton
Algorithm in Nonconvex Sparse Learning
Xingguo Li1,4  Lin F. Yang2∗  Jason Ge2  Jarvis Haupt1  Tong Zhang3  Tuo Zhao4†
1 University of Minnesota  2 Princeton University  3 Tencent AI Lab  4 Georgia Tech
Abstract
We propose a DC proximal Newton algorithm for solving nonconvex regularized
sparse learning problems in high dimensions. Our proposed algorithm integrates the
proximal newton algorithm with multi-stage convex relaxation based on difference
of convex (DC) programming, and enjoys both strong computational and statistical
guarantees. Specifically, by leveraging a sophisticated characterization of sparse
modeling structures (i.e., local restricted strong convexity and Hessian smoothness),
we prove that within each stage of convex relaxation, our proposed algorithm
achieves (local) quadratic convergence, and eventually obtains a sparse approximate
local optimum with optimal statistical properties after only a few convex relaxations.
Numerical experiments are provided to support our theory.
1  Introduction
We consider a high dimensional regression or classification problem: Given n independent observations {(x_i, y_i)}_{i=1}^n ⊆ R^d × R sampled from a joint distribution D(X, Y), we are interested in learning
the conditional distribution P(Y | X) from the data. A popular modeling approach is the Generalized
Linear Model (GLM) [20], which assumes

    P(Y | X; θ*) ∝ exp( (Y X^⊤θ* − ψ(X^⊤θ*)) / c(σ) ),

where c(σ) is a scaling parameter, and ψ is the cumulant function. A natural approach to estimate
θ* is Maximum Likelihood Estimation (MLE) [25], which essentially minimizes the negative
?? is the Maximum Likelihood Estimation (MLE) [25], which essentially minimizes the negative
log-likelihood of the data given parameters. However, MLE often performs poorly in parameter
estimation in high dimensions due to the curse of dimensionality [6].
To address this issue, machine learning researchers and statisticians follow Occam?s razor principle,
and propose sparse modeling approaches [3, 26, 30, 32]. These sparse modeling approaches assume
that θ* is a sparse vector with only s* nonzero entries, where s* < n ≪ d. This implies that
many variables in X are essentially irrelevant to modeling, which is very natural to many real world
applications such as genomics and medical imaging [7, 21]. Many empirical results have corroborated
the success of sparse modeling in high dimensions. Specifically, many sparse modeling approaches
obtain a sparse estimator of θ* by solving the following regularized optimization problem,

    θ̂ = argmin_{θ ∈ R^d} L(θ) + R_{λ_tgt}(θ),    (1)

where L : R^d → R is the convex negative log-likelihood (or pseudo-likelihood) function, R_{λ_tgt} :
R^d → R is a sparsity-inducing decomposable regularizer, i.e., R_{λ_tgt}(θ) = Σ_{j=1}^d r_{λ_tgt}(θ_j) with
r_{λ_tgt} : R → R, and λ_tgt > 0 is the regularization parameter. Many existing sparse modeling
approaches can be cast as special examples of (1), such as sparse linear regression [30], sparse logistic
regression [32], and sparse Poisson regression [26].
∗ The work was done while the author was at Johns Hopkins University.
† The authors acknowledge support from DARPA YFA N66001-14-1-4047 and NSF Grant IIS-1447639.
Correspondence to: Xingguo Li <[email protected]> and Tuo Zhao <[email protected]>.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Given a convex regularizer, e.g., R_{λ_tgt}(θ) = λ_tgt ||θ||_1 [30], we can obtain global optima in polynomial
time and characterize their statistical properties. However, convex regularizers incur large estimation
bias. To address this issue, several nonconvex regularizers are proposed, including the minimax
concave penalty (MCP, [39]), smoothly clipped absolute deviation (SCAD, [8]), and capped ℓ1 regularization [40]. The obtained estimator (e.g., hypothetical global optima of (1)) can achieve
faster statistical rates of convergence than their convex counterparts [9, 16, 22, 34].
Despite these superior statistical guarantees, nonconvex regularizers raise a greater computational
challenge than convex regularizers in high dimensions. Popular iterative algorithms for convex
optimization, such as proximal gradient descent [2, 23] and coordinate descent [17, 29], no longer
have strong global convergence guarantees for nonconvex optimization. Therefore, establishing
statistical properties of the estimators obtained by these algorithms becomes very challenging, which
explains why existing theoretical studies on computational and statistical guarantees for nonconvex
regularized sparse modeling approaches are so limited until the recent rise of a new area named "statistical
optimization". Specifically, machine learning researchers start to incorporate certain structures of
sparse modeling (e.g. restricted strong convexity, large regularization effect) into the algorithmic
design and convergence analysis for optimization. This further motivates several recent advances:
[16] propose proximal gradient algorithms for a family of nonconvex regularized estimators with a
linear convergence to an approximate local optimum with suboptimal statistical guarantees; [34, 43]
further propose homotopy proximal gradient and coordinate gradient descent algorithms with a linear
convergence to a local optimum and optimal statistical guarantees; [9, 41] propose a multistage
convex relaxation-based (also known as Difference of Convex (DC) Programming) proximal gradient
algorithm, which can guarantee an approximate local optimum with optimal statistical properties.
Their computational analysis further shows that within each stage of the convex relaxation, the
proximal gradient algorithm achieves a (local) linear convergence to a unique sparse global optimum
for the relaxed convex subproblem.
The aforementioned approaches only consider first order algorithms, such as proximal gradient
descent and proximal coordinate gradient descent. The second order algorithms with theoretical
guarantees are still largely missing for high dimensional nonconvex regularized sparse modeling
approaches, but this has not suppressed the enthusiasm for applying heuristic second order algorithms
to real world problems. Some evidence has already corroborated their superior computational
performance over first order algorithms (e.g. glmnet [10]). This further motivates our attempt
towards understanding the second order algorithms in high dimensions.
In this paper, we study a multistage convex relaxation-based proximal Newton algorithm for nonconvex regularized sparse learning. This algorithm is not only highly efficient in practice, but also enjoys
strong computational and statistical guarantees in theory. Specifically, by leveraging a sophisticated
characterization of local restricted strong convexity and Hessian smoothness, we prove that within
each stage of convex relaxation, our proposed algorithm maintains the solution sparsity, and achieves
a (local) quadratic convergence, which is a significant improvement over (local) linear convergence
of proximal gradient algorithm in [9] (See more details in later sections). This eventually allows us to
obtain an approximate local optimum with optimal statistical properties after only a few relaxations.
Numerical experiments are provided to support our theory. To the best of our knowledge, this is the
first of second order based approaches for high dimensional sparse learning using convex/nonconvex
regularizers with strong statistical and computational guarantees.
Notations: Given a vector v ∈ R^d, we denote the ℓ_p norm as ||v||_p = (Σ_{j=1}^d |v_j|^p)^{1/p} for
a real p > 0, the number of nonzero entries as ||v||_0 = Σ_j 1(v_j ≠ 0), and v_{\j} =
(v_1, ..., v_{j−1}, v_{j+1}, ..., v_d)^⊤ ∈ R^{d−1} as the subvector with the j-th entry removed. Given an
index set A ⊆ {1, ..., d}, Ā = {j | j ∈ {1, ..., d}, j ∉ A} is the complementary set to A. We use
v_A to denote a subvector of v indexed by A. Given a matrix A ∈ R^{d×d}, we use A_{*j} (A_{k*}) to denote
the j-th column (k-th row), and λ_max(A) (λ_min(A)) the largest (smallest) eigenvalue of A. We
define ||A||_F² = Σ_j ||A_{*j}||_2² and ||A||_2² = λ_max(A^⊤A). We denote A_{\i\j} as the submatrix of A
with the i-th row and the j-th column removed, A_{\i,j} (A_{i,\j}) as the j-th column (i-th row) of A with
its i-th (j-th) entry removed, and A_{AA} as a submatrix of A with both rows and columns indexed by
A. If A is a PSD matrix, we define ||v||_A = √(v^⊤Av) as the induced seminorm for vector v. We use
conventional notations O(·), Ω(·), Θ(·) to denote the limiting behavior ignoring constants, and O_P(·)
to denote the limiting behavior in probability. C_1, C_2, ... are denoted as generic positive constants.
2  DC Proximal Newton Algorithm
Throughout the rest of the paper, we assume: (1) L(θ) is nonstrongly convex and twice continuously
differentiable, e.g., the negative log-likelihood function of the generalized linear model (GLM);
(2) L(θ) takes an additive form, i.e., L(θ) = (1/n) Σ_{i=1}^n ℓ_i(θ), where each ℓ_i(θ) is associated with an
observation (x_i, y_i) for i = 1, ..., n. Taking GLM as an example, we have ℓ_i(θ) = ψ(x_i^⊤θ) − y_i x_i^⊤θ,
where ψ is the cumulant function.
For nonconvex regularization, we use the capped ℓ1 regularizer [40] defined as

    R_{λ_tgt}(θ) = Σ_{j=1}^d r_{λ_tgt}(θ_j) = λ_tgt Σ_{j=1}^d min{|θ_j|, β λ_tgt},
where β > 0 is an additional tuning parameter. Our algorithm and theory can also be extended to the
SCAD and MCP regularizers in a straightforward manner [8, 39]. As shown in Figure 1, r_{λ_tgt}(θ_j)
can be decomposed as the difference of two convex functions [5], i.e.,

    r_{λ_tgt}(θ_j) = λ_tgt |θ_j| − λ_tgt max{|θ_j| − β λ_tgt, 0},

where both terms on the right-hand side are convex in θ_j. This motivates us to apply the difference
of convex (DC) programming approach to solve the nonconvex problem. We then introduce the DC
proximal Newton algorithm, which contains three components: the multistage convex relaxation,
warm initialization, and the proximal Newton algorithm. A small numerical sketch of the
decomposition follows below.

Figure 1: The capped ℓ1 regularizer is the difference of two convex functions. This allows us to relax
the nonconvex regularizer based on the concave duality.
(I) The multistage convex relaxation is essentially a sequential optimization framework [40]. At
the (K + 1)-th stage, we have the output solution from the previous stage ?b{K} . For notational
{K+1}
{K+1} >
{K+1}
simplicity, we define a regularization vector as {K+1} = ( 1
, ..., d
) , where j
=
{K}
b
| ? tgt ) for all j = 1, . . . , d. Let be the Hadamard (entrywise) product. We solve
tgt ? 1(|?
j
a convex relaxation of (1) at ? = ?b{K} as follows,
?
{K+1}
= argmin F
?2Rd
where ||
{K+1}
?||1 =
a convex relaxation of R
{K}
{K+1}
Pd
j=1
tgt
(?), where F
{K+1}
|?j |.
j
b{K}
(?) at ? = ?
{K+1}
(?) = L(?) + ||
One can verify that ||
{K+1}
{K+1}
?||1 ,
(2)
?||1 is essentially
based on the concave duality in DC programming.
We emphasis that ?
denotes the unique sparse global optimum for (2) (The uniqueness will be
elaborated in later sections), and ?b{K} denotes the output solution for (2) when we terminate the
iteration at the K-th convex relaxation stage. The stopping criterion will be explained later.
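The reweighting step itself is one line of code. The sketch below (ours, with hypothetical names) computes λ^{K+1} from the previous stage's output: coordinates whose estimates already exceed βλ_tgt in magnitude receive zero penalty at the next stage, which is what removes the estimation bias on strong signals.

```python
import numpy as np

def relaxation_weights(theta_hat, lam_tgt, beta):
    """lambda_j^{K+1} = lam_tgt * 1(|theta_hat_j^{K}| <= beta * lam_tgt)."""
    return lam_tgt * (np.abs(theta_hat) <= beta * lam_tgt).astype(float)

theta_hat = np.array([0.0, 0.05, 0.8])                 # hypothetical stage-K output
print(relaxation_weights(theta_hat, lam_tgt=0.5, beta=0.2))
# -> [0.5, 0.5, 0.0]: the large coordinate 0.8 is no longer penalized.
```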
(II) The warm initialization is the first stage of DC programming, where we solve the ℓ1-regularized
counterpart of (1),

    θ̄^{1} = argmin_{θ ∈ R^d} L(θ) + λ_tgt ||θ||_1.    (3)

This is an intuitive choice for sparse statistical recovery, since the ℓ1-regularized estimator can give
us a good initialization, which is sufficiently close to θ*. Note that this is equivalent to (2) with
λ_j^{1} = λ_tgt for all j = 1, ..., d, which can be viewed as the convex relaxation of (1) at θ̂^{0} = 0
for the first stage.
(III) The proximal Newton algorithm proposed in [12] is then applied to solve the convex subproblem (2) at each stage, including the warm initialization (3). For notational simplicity, we omit
the stage index {K} for all intermediate updates of θ, and only use (t) as the iteration index within
the K-th stage for all K ≥ 1. Specifically, at the K-th stage, given θ^(t) at the t-th iteration of the
proximal Newton algorithm, we consider a quadratic approximation of (2) at θ^(t) as follows,

    Q(θ; θ^(t), λ^{K}) = L(θ^(t)) + (θ − θ^(t))^⊤ ∇L(θ^(t)) + (1/2) ||θ − θ^(t)||²_{∇²L(θ^(t))} + ||λ^{K} ⊙ θ||_1,    (4)

where ||θ − θ^(t)||²_{∇²L(θ^(t))} = (θ − θ^(t))^⊤ ∇²L(θ^(t)) (θ − θ^(t)). We then take θ^(t+1/2) =
argmin_θ Q(θ; θ^(t), λ^{K}). Since L(θ) = (1/n) Σ_{i=1}^n ℓ_i(θ) takes an additive form, we can avoid
directly computing the d-by-d Hessian matrix in (4). Alternatively, in order to reduce the memory
usage when d is large, we rewrite (4) as a regularized weighted least squares problem as follows,

    Q(θ; θ^(t), λ^{K}) = (1/n) Σ_{i=1}^n w_i (z_i − x_i^⊤θ)² + ||λ^{K} ⊙ θ||_1 + constant,    (5)

where the w_i's and z_i's are easy-to-compute constants depending on θ^(t), ℓ_i(θ^(t))'s, x_i's, and y_i's.
Remark 1. Existing literature has shown that (5) can be efficiently solved by coordinate descent
algorithms in conjunction with the active set strategy [43]. See more details in [10] and Appendix B.
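For intuition, the sketch below (ours; the helper names are hypothetical) shows how (5) arises for sparse logistic regression, where ℓ_i(θ) = ψ(x_i^⊤θ) − y_i x_i^⊤θ with ψ(u) = log(1 + e^u). One standard choice, up to a constant scaling absorbed into the w_i's, is w_i = ψ''(x_i^⊤θ^(t)) and z_i = x_i^⊤θ^(t) + (y_i − ψ'(x_i^⊤θ^(t)))/w_i; a single coordinate descent pass with soft-thresholding on the resulting weighted lasso is then cheap.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def irls_weights_targets(X, y, theta):
    """Working weights/targets for the quadratic model of logistic loss at theta."""
    eta = X @ theta                        # linear predictor x_i^T theta
    mu = sigmoid(eta)                      # psi'(eta)
    w = np.maximum(mu * (1.0 - mu), 1e-8)  # psi''(eta), floored for stability
    z = eta + (y - mu) / w                 # working response
    return w, z

def soft_threshold(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def cd_pass(X, w, z, lam_vec, theta):
    """One cycle of coordinate descent on the weighted lasso
       (1/2n) sum_i w_i (z_i - x_i^T theta)^2 + sum_j lam_j |theta_j|."""
    n, d = X.shape
    r = z - X @ theta                      # full residual
    for j in range(d):
        xj = X[:, j]
        r += xj * theta[j]                 # remove coordinate j from the fit
        rho = (w * xj * r).sum() / n
        denom = (w * xj ** 2).sum() / n
        theta[j] = soft_threshold(rho, lam_vec[j]) / denom
        r -= xj * theta[j]                 # add it back
    return theta
```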
For the first stage (i.e., warm initialization), we require an additional backtracking line search
procedure to guarantee the descent of the objective value [12]. Specifically, we denote

    Δθ^(t) = θ^(t+1/2) − θ^(t).

Then we start from η_t = 1 and use backtracking line search to find the optimal η_t ∈ (0, 1] such that
the Armijo condition [1] holds. Specifically, given a constant ν ∈ (0.9, 1), we update η_t = ν^q from
q = 0 and find the smallest integer q such that

    F_{λ^{1}}(θ^(t) + η_t Δθ^(t)) ≤ F_{λ^{1}}(θ^(t)) + α η_t Δ_t,

where α ∈ (0, 1/2) is a fixed constant and

    Δ_t = ∇L(θ^(t))^⊤ Δθ^(t) + ||λ^{1} ⊙ (θ^(t) + Δθ^(t))||_1 − ||λ^{1} ⊙ θ^(t)||_1.

We then set θ^(t+1) = θ^(t) + η_t Δθ^(t). We terminate the iterations when the following
approximate KKT condition holds:

    ω_{λ^{1}}(θ^(t)) := min_{ξ ∈ ∂||θ^(t)||_1} ||∇L(θ^(t)) + λ^{1} ⊙ ξ||_∞ ≤ ε,

where ε is a predefined precision parameter. Then we set the output solution as θ̂^{1} = θ^(t). Note that
θ̂^{1} is then used as the initial solution for the second stage of convex relaxation (2). The proximal
Newton algorithm with backtracking line search is summarized in Algorithm 2 in Appendix.
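Both the backtracking rule and the stopping criterion are easy to implement. The following sketch (ours; F is any callable returning the full objective L(θ) + ||λ ⊙ θ||_1) uses the closed form of the inner minimization over subgradients of the ℓ1 term.

```python
import numpy as np

def kkt_residual(grad, lam_vec, theta):
    """omega(theta) = min over subgradients xi of ||grad + lam * xi||_inf."""
    r = np.where(theta != 0,
                 np.abs(grad + lam_vec * np.sign(theta)),
                 np.maximum(np.abs(grad) - lam_vec, 0.0))
    return r.max()

def armijo_step(F, grad, lam_vec, theta, delta, alpha=0.25, nu=0.95, max_q=50):
    """Backtracking line search; grad is the gradient of L at theta,
       delta is the proximal Newton direction."""
    pen = lambda v: (lam_vec * np.abs(v)).sum()
    d_t = grad @ delta + pen(theta + delta) - pen(theta)  # Delta_t from the text
    f0 = F(theta)
    eta = 1.0
    for _ in range(max_q):
        if F(theta + eta * delta) <= f0 + alpha * eta * d_t:
            break
        eta *= nu                                          # eta = nu^q
    return theta + eta * delta, eta
```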
Such a backtracking line search procedure is not necessary at the K-th stage for all K ≥ 2. In other
words, we simply take η_t = 1 and θ^(t+1) = θ^(t+1/2) for all t ≥ 0 when K ≥ 2. This leads to more
efficient updates for the proximal Newton algorithm from the second stage of convex relaxation (2).
We summarize our proposed DC proximal Newton algorithm in Algorithm 1 in Appendix.
3  Computational and Statistical Theories
Before we present our theoretical results, we first introduce some preliminaries, including important
definitions and assumptions. We define the largest and smallest s-sparse eigenvalues as follows.
Definition 2. We define the largest and smallest s-sparse eigenvalues of ∇²L(θ) as

    ρ⁺_s = sup_{||v||_0 ≤ s} (v^⊤ ∇²L(θ) v) / (v^⊤ v)   and   ρ⁻_s = inf_{||v||_0 ≤ s} (v^⊤ ∇²L(θ) v) / (v^⊤ v)

for any positive integer s. We define κ_s = ρ⁺_s / ρ⁻_s as the s-sparse condition number.
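Evaluating ρ⁺_s and ρ⁻_s exactly requires a search over all size-s supports, which is combinatorial. The sketch below (a rough heuristic of ours, not part of the paper) samples random supports and takes the extreme eigenvalues of the corresponding principal submatrices of a given Hessian; this yields a lower bound on ρ⁺_s and an upper bound on ρ⁻_s.

```python
import numpy as np

def sparse_eig_estimate(H, s, n_samples=200, seed=0):
    """Heuristic estimate of the s-sparse eigenvalues of a symmetric matrix H.
       Returns (lower bound on rho_s^+, upper bound on rho_s^-)."""
    rng = np.random.default_rng(seed)
    d = H.shape[0]
    rho_plus, rho_minus = -np.inf, np.inf
    for _ in range(n_samples):
        S = rng.choice(d, size=s, replace=False)      # random size-s support
        evals = np.linalg.eigvalsh(H[np.ix_(S, S)])   # eigenvalues of H[S, S]
        rho_plus = max(rho_plus, evals[-1])
        rho_minus = min(rho_minus, evals[0])
    return rho_plus, rho_minus
```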
The sparse eigenvalue (SE) conditions are widely studied in high dimensional sparse modeling problems, and are closely related to restricted strong convexity/smoothness properties and restricted eigenvalue properties [22, 27, 33, 44]. For notational convenience, given a parameter θ ∈ R^d and a real constant R > 0, we define a neighborhood of θ with radius R as B(θ, R) := {φ ∈ R^d | ||φ − θ||_2 ≤ R}.
Our first assumption is for the sparse eigenvalues of the Hessian matrix over a sparse domain.

Assumption 1. Given θ ∈ B(θ*, R) for a generic constant R, there exists a generic constant C_0 such
that ∇²L(θ) satisfies SE with parameters 0 < ρ⁻_{s*+2s̃} ≤ ρ⁺_{s*+2s̃} < +∞, where s̃ ≥ C_0 κ²_{s*+2s̃} s*
and κ_{s*+2s̃} = ρ⁺_{s*+2s̃} / ρ⁻_{s*+2s̃}.
Assumption 1 requires that L(θ) has a finite largest and a positive smallest sparse eigenvalue, given θ is
sufficiently sparse and close to θ*. Analogous conditions are widely used in high dimensional analysis
[13, 14, 34, 35, 43], such as the restricted strong convexity/smoothness of L(θ) (RSC/RSS, [6]). Given
any θ, θ′ ∈ R^d, the RSC/RSS parameter can be defined as δ(θ′, θ) := L(θ′) − L(θ) − ∇L(θ)^⊤(θ′ − θ).
For notational simplicity, we define S = {j | θ*_j ≠ 0} and S̄ = {j | θ*_j = 0}. The following
proposition connects the SE property to the RSC/RSS property.

Proposition 3. Given θ, θ′ ∈ B(θ*, R) with ||θ_{S̄}||_0 ≤ s̃ and ||θ′_{S̄}||_0 ≤ s̃, L(θ) satisfies

    (1/2) ρ⁻_{s*+2s̃} ||θ′ − θ||²_2 ≤ δ(θ′, θ) ≤ (1/2) ρ⁺_{s*+2s̃} ||θ′ − θ||²_2.

The proof of Proposition 3 is provided in [6], and therefore is omitted. Proposition 3 implies that
L(θ) is essentially strongly convex, but only over a sparse domain (see Figure 2).
The second assumption requires ∇²L(θ) to be smooth over the sparse domain.

Assumption 2 (Local Restricted Hessian Smoothness). Recall that s̃ is defined in Assumption 1.
There exist generic constants L_{s*+2s̃} and R such that for any θ, θ′ ∈ B(θ*, R) with ||θ_{S̄}||_0 ≤ s̃ and
||θ′_{S̄}||_0 ≤ s̃, we have ||∇²L(θ) − ∇²L(θ′)||_2 ≤ L_{s*+2s̃} ||θ − θ′||_2.

Assumption 2 guarantees that ∇²L(θ) is Lipschitz continuous within a neighborhood of θ* over a
sparse domain. The local restricted Hessian smoothness is parallel to the local Hessian smoothness
for analyzing the proximal Newton method in low dimensions [12].

In our analysis, we set the radius R as R := ρ⁻_{s*+2s̃} / (2 L_{s*+2s̃}), where 2R = ρ⁻_{s*+2s̃} / L_{s*+2s̃}
is the radius of the region centered at the unique global minimizer of (2) for quadratic convergence of
the proximal Newton algorithm. This is parallel to the radius in low dimensions [12], except that we
restrict the parameters over the sparse domain.

Figure 2: An illustrative two dimensional example of the restricted strong convexity. L(θ) is not strongly
convex. But if we restrict θ to be sparse (black curve), L(θ) behaves like a strongly convex function.
The third assumption requires the choice of λ_tgt to be appropriate.

Assumption 3. Given the true modeling parameter θ*, there exists a generic constant C_1 such
that λ_tgt = C_1 √(log d / n) ≥ 4 ||∇L(θ*)||_∞. Moreover, for a large enough n, we have √(s*) λ_tgt ≤
C_2 R ρ⁻_{s*+2s̃}.

Assumption 3 guarantees that the regularization is sufficiently large to eliminate irrelevant coordinates
such that the obtained solution is sufficiently sparse [4, 22]. In addition, λ_tgt can not be too
large, which guarantees that the estimator is close enough to the true model parameter. The above
assumptions are deterministic. We will verify them under GLM in the statistical analysis.

Our last assumption is on the predefined precision parameter ε as follows.

Assumption 4. For each stage of solving the convex relaxation subproblems (2) for all K ≥ 1, there
exists a generic constant C_3 such that ε satisfies ε = C_3/√n ≤ λ_tgt/8.

Assumption 4 guarantees that the output solution θ̂^{K} at each stage for all K ≥ 1 has a sufficient
precision, which is critical for our convergence analysis of multistage convex relaxation.
3.1  Computational Theory
We first characterize the convergence for the first stage of our proposed DC proximal Newton
algorithm, i.e., the warm initialization for solving (3).

Theorem 4 (Warm Initialization, K = 1). Suppose that Assumptions 1–4 hold. After sufficiently
many iterations T < ∞, the following results hold for all t ≥ T:

    ||θ^(t) − θ*||_2 ≤ R  and  F_{λ^{1}}(θ^(t)) ≤ F_{λ^{1}}(θ*) + 15 λ²_tgt s* / (4 ρ⁻_{s*+2s̃}),

which further guarantee

    η_t = 1,  ||θ^(t)_{S̄}||_0 ≤ s̃,  and  ||θ^(t+1) − θ̄^{1}||_2 ≤ (L_{s*+2s̃} / (2 ρ⁻_{s*+2s̃})) ||θ^(t) − θ̄^{1}||²_2,

where θ̄^{1} is the unique sparse global minimizer of (3) satisfying ||θ̄^{1}_{S̄}||_0 ≤ s̃ and ω_{λ^{1}}(θ̄^{1}) = 0.
Moreover, we need at most

    T + log log( 3 ρ⁺_{s*+2s̃} / ε )

iterations to terminate the proximal Newton algorithm for the warm initialization (3), where the
output solution θ̂^{1} satisfies

    ||θ̂^{1}_{S̄}||_0 ≤ s̃,  ω_{λ^{1}}(θ̂^{1}) ≤ ε,  and  ||θ̂^{1} − θ*||_2 ≤ 18 λ_tgt √(s*) / ρ⁻_{s*+2s̃}.

The proof of Theorem 4 is provided in Appendix C.1. Theorem 4 implies: (I) The objective value is
sufficiently small after finite T iterations of the proximal Newton algorithm, which further guarantees
sparse solutions and good computational performance in all follow-up iterations. (II) The solution
enters the ball B(θ*, R) after finite T iterations. Combined with the sparsity of the solution, it further
guarantees that the solution enters the region of quadratic convergence. Thus the backtracking line
search stops immediately and outputs η_t = 1 for all t ≥ T. (III) The total number of iterations is at
most O(T + log log(1/ε)) to achieve the approximate KKT condition ω_{λ^{1}}(θ^(t)) ≤ ε, which serves as
the stopping criterion of the warm initialization (3).
Given these good properties of the output solution θ̂^{1} obtained from the warm initialization, we can
further show that our proposed DC proximal Newton algorithm for all follow-up stages (i.e., K ≥ 2)
achieves better computational performance than the first stage. This is characterized by the following
theorem. For notational simplicity, we omit the iteration index {K} for the intermediate updates
within each stage for the multistage convex relaxation.

Theorem 5 (Stage K, K ≥ 2). Suppose Assumptions 1–4 hold. Then for all iterations t = 1, 2, ...
within each stage K ≥ 2, we have

    ||θ^(t)_{S̄}||_0 ≤ s̃  and  ||θ^(t) − θ*||_2 ≤ R,

which further guarantee

    η_t = 1,  ||θ^(t+1) − θ̄^{K}||_2 ≤ (L_{s*+2s̃} / (2 ρ⁻_{s*+2s̃})) ||θ^(t) − θ̄^{K}||²_2,  and  F_{λ^{K}}(θ^(t+1)) < F_{λ^{K}}(θ^(t)),

where θ̄^{K} is the unique sparse global minimizer of (2) at the K-th stage satisfying ||θ̄^{K}_{S̄}||_0 ≤ s̃
and ω_{λ^{K}}(θ̄^{K}) = 0. Moreover, we need at most

    log log( 3 ρ⁺_{s*+2s̃} / ε )

iterations to terminate the proximal Newton algorithm for the K-th stage of convex relaxation (2),
where the output solution θ̂^{K} satisfies ||θ̂^{K}_{S̄}||_0 ≤ s̃, ω_{λ^{K}}(θ̂^{K}) ≤ ε, and

    ||θ̂^{K} − θ*||_2 ≤ C_2 ( ||∇L(θ*)_S||_2 + λ_tgt √( Σ_{j∈S} 1(|θ*_j| ≤ β λ_tgt) ) + ε √(s*) ) + C_3 · 0.7^{K−1} ||θ̂^{1} − θ*||_2

for some generic constants C_2 and C_3.
The proof of Theorem 5 is provided in Appendix C.2. A geometric interpretation for the computational
theory of local quadratic convergence for our proposed algorithm is provided in Figure 3.

Figure 3: A geometric interpretation of local quadratic convergence: the warm initialization enters the
region of quadratic convergence after finite iterations, and the follow-up stages remain in the region of
quadratic convergence. The final estimator θ̂^{K̃} has a better estimation error than the estimator θ̂^{1}
obtained from the convex warm initialization. (The figure depicts the neighborhood B(θ*, R) of θ*, the
initial solution θ̂^{0} for the warm initialization, the output solutions θ̂^{1}, θ̂^{2}, ..., θ̂^{K} of the
successive stages, and the region of quadratic convergence.)

From the second stage of convex relaxation (2), i.e., K ≥ 2, Theorem 5 implies: (I) Within each stage,
the algorithm maintains a sparse solution throughout all iterations t ≥ 1. The sparsity further
guarantees that the SE property and the restricted Hessian smoothness hold, which are necessary
conditions for the fast convergence of the proximal Newton algorithm. (II) The solution is maintained
in the region B(θ*, R) for all t ≥ 1. Combined with the sparsity of the solution, we have that the
solution enters the region of quadratic convergence. This guarantees that we only need to set the step
size η_t = 1, and the objective value is monotonically decreasing without the sophisticated backtracking
line search procedure. Thus, the proximal Newton algorithm enjoys the same fast convergence as in
low dimensional optimization problems [12]. (III) With the quadratic convergence rate, the number of
iterations is at most O(log log(1/ε)) to attain the approximate KKT condition ω_{λ^{K}}(θ^(t)) ≤ ε,
which is the stopping criterion at each stage.
3.2  Statistical Theory
Recall that our computational theory relies on deterministic assumptions (Assumptions 1–3).
However, these assumptions involve data, which are sampled from certain statistical distributions.
Therefore, we need to verify that these assumptions hold with high probability under a mild data
generation process (i.e., GLM) in high dimensions in the following lemma.

Lemma 6. Suppose that the x_i's are i.i.d. sampled from a zero-mean distribution with covariance matrix
Cov(x_i) = Σ such that ∞ > c_max ≥ λ_max(Σ) ≥ λ_min(Σ) ≥ c_min > 0, and for any v ∈ R^d,
v^⊤x_i is sub-Gaussian with variance at most a||v||²_2, where c_max, c_min, and a are generic constants.
Moreover, for some constant M > 0, at least one of the following two conditions holds: (I) the
Hessian of the cumulant function is uniformly bounded, ||ψ''||_∞ ≤ M; or (II) the covariates are
bounded, ||x_i||_∞ ≤ 1, and E[max_{|u|≤1} [ψ''(x^⊤θ* + u)]^p] ≤ M for some p > 2. Then Assumptions
1–3 hold with high probability.

The proof of Lemma 6 is provided in Appendix F. Given that these assumptions hold with high
probability, we know that the proximal Newton algorithm attains quadratic convergence within
each stage of convex relaxation with high probability. Then we establish the statistical rate of
convergence for the obtained estimator in parameter estimation.

Theorem 7. Suppose the observations are generated from a GLM satisfying the conditions in Lemma 6,
n is large enough such that n ≥ C_4 s* log d, and β = C_5/c_min is the constant defined in Section 2, for
generic constants C_4 and C_5. Then with high probability, the output solution θ̂^{K} satisfies
    ||θ̂^{K} − θ*||_2 ≤ C_6 ( √(s*/n) + √(s′ log d / n) ) + C_7 · 0.7^K √(s* log d / n)

for generic constants C_6 and C_7, where s′ = Σ_{j∈S} 1(|θ*_j| ≤ β λ_tgt).
Theorem 7 is a direct result combining Theorem 5 and the analysis in [40]. As can be seen, s′
is essentially the number of nonzero θ_j's with magnitudes smaller than β λ_tgt, which are often
considered as "weak" signals. Theorem 7 essentially implies that by exploiting the multistage convex
relaxation framework, our proposed DC proximal Newton algorithm gradually reduces the estimation
bias for "strong" signals, and eventually obtains an estimator with better statistical properties than
the ℓ1-regularized estimator. Specifically, let K̃ be the smallest integer such that after K̃ stages of
convex relaxation we have C_7 0.7^{K̃} √(s* log d / n) ≤ C_6 max{ √(s′ log d / n), √(s*/n) }, which is
equivalent to requiring K̃ = O(log log d). This implies that the total number of proximal Newton
updates is at most O((T + log log(1/ε)) · (1 + log log d)). In addition, the obtained estimator attains
the optimal statistical properties in parameter estimation:

    ||θ̂^{K̃} − θ*||_2 ≤ O_P( √(s′ log d / n) + √(s*/n) )   v.s.   ||θ̂^{1} − θ*||_2 ≤ O_P( √(s* log d / n) ).    (6)

Recall that θ̂^{1} is obtained by the warm initialization (3). As illustrated in Figure 3, this implies that
the statistical rate in (6) for ||θ̂^{K̃} − θ*||_2, obtained from the multistage convex relaxation for the
nonconvex regularized problem (1), is a significant improvement over ||θ̂^{1} − θ*||_2 obtained from
the convex problem (3). Especially when s′ is small, i.e., most of the nonzero θ_j's are strong signals,
our result approaches the oracle bound³ O_P(√(s*/n)) [8], as illustrated in Figure 4.
4  Experiments
We compare our DC proximal Newton (DC+PN) algorithm with two competing algorithms for solving
the nonconvex regularized sparse logistic regression problem. They are the accelerated proximal
gradient (APG) algorithm implemented in the SPArse Modeling Software (SPAMS, coded in C++ [18]),
and the accelerated coordinate descent (ACD) algorithm implemented in the R package gcdnet (coded
in Fortran, [36]). We further optimize the active set strategy in gcdnet to boost its computational
performance. To integrate these two algorithms with the multistage convex relaxation framework, we
revise their source code.

Figure 4: An illustration of the statistical rates of convergence in parameter estimation (estimation
error ||θ̂^{K̃} − θ*||_2 versus the percentage of strong signals (s* − s′)/s*). Our obtained estimator has
an error bound O_P(√(s*/n) + √(s′ log d/n)) between the oracle bound O_P(√(s*/n)) and the slow bound
O_P(√(s* log d/n)) from the convex problem in general. When the percentage of strong signals
increases, i.e., s′ decreases, our result approaches the oracle bound.

To further boost the computational efficiency at each stage of the convex relaxation, we apply pathwise
optimization [10] for all algorithms in practice. Specifically, at each stage, we use a geometrically
decreasing sequence of regularization parameters {λ_[m] = ν^m λ_[0]}_{m=1}^M, where λ_[0] is the
smallest value such that the corresponding solution is zero, ν ∈ (0, 1) is a shrinkage parameter, and
λ_tgt = λ_[M]. For each λ_[m], we apply the corresponding algorithm (DC+PN, DC+APG, and DC+ACD)
to solve the nonconvex regularized problem (1). Moreover, we initialize the solution for a new
regularization parameter λ_[m+1] using the output solution obtained with λ_[m]. Such a pathwise
optimization scheme has achieved tremendous success in practice [10, 15, 42]. We refer to [43] for a
detailed discussion of pathwise optimization. A sketch of such a regularization path is given below.
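A minimal sketch of the geometric sequence (ours; λ_[0] is computed here for the ℓ1-penalized logistic loss, for which the all-zero solution is optimal whenever λ ≥ ||∇L(0)||_∞):

```python
import numpy as np

def lambda_path(X, y, lam_tgt, M):
    """Geometric sequence lam_[m] = nu^m * lam_[0], m = 0..M, with lam_[M] = lam_tgt.
       lam_[0] is the smallest lambda for which the all-zero solution is optimal."""
    n = X.shape[0]
    lam0 = np.abs(X.T @ (y - 0.5)).max() / n   # ||grad L(0)||_inf for logistic loss
    nu = (lam_tgt / lam0) ** (1.0 / M)         # shrinkage ratio so that lam_[M] = lam_tgt
    return lam0 * nu ** np.arange(M + 1)
```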
Our comparison contains two real datasets, "madelon" (n = 2000, d = 500, [11]) and "gisette"
(n = 2000, d = 5000, [11]), and three simulated datasets, "sim_1k" (d = 1000), "sim_5k" (d = 5000), and
"sim_10k" (d = 10000), with the sample size n = 1000 for all three simulated datasets. We set
λ_tgt = 0.25 √(log d / n) and β = 0.2 for all settings here. We generate each row of the design matrix X
independently from a d-dimensional normal distribution N(0, Σ), where Σ_jk = 0.5^{|j−k|} for
j, k = 1, ..., d. We generate y ~ Bernoulli(1/[1 + exp(−Xθ*)]), where θ* has all zero entries except 20
randomly selected entries. These nonzero entries are independently sampled from Uniform(0, 1). The
stopping criteria for all algorithms are tuned such that they attain similar optimization errors.
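For concreteness, the following sketch (ours) generates data matching the description above: Toeplitz covariance Σ_jk = 0.5^{|j−k|}, 20 nonzero coefficients drawn from Uniform(0, 1), and Bernoulli responses from the logistic model. For very large d one would sample X via a Cholesky factor instead of calling multivariate_normal directly.

```python
import numpy as np
from scipy.linalg import toeplitz

def simulate_logistic(n, d, s=20, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Sigma = toeplitz(rho ** np.arange(d))            # Sigma_jk = rho^{|j-k|}
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    theta = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)   # 20 random nonzero entries
    theta[support] = rng.uniform(0.0, 1.0, size=s)
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    y = rng.binomial(1, p).astype(float)
    return X, y, theta
```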
All three algorithms are compared in wall clock time. Our DC+PN algorithm is implemented in
C with double precision and called from R by a wrapper. All experiments are performed on a
computer with a 2.6GHz Intel Core i7 and 16GB RAM. For each algorithm and dataset, we repeat
the algorithm 10 times and report the average value and standard deviation of the wall clock time in
Table 1. As can be seen, our DC+PN algorithm significantly outperforms the competing algorithms.
We remark that for increasing d, the superiority of DC+PN over DC+ACD becomes less significant,
as the Newton method is more sensitive to ill-conditioned problems. This can be mitigated by using a
denser sequence of {λ_[m]} along the solution path.

We then illustrate the quadratic convergence of our DC+PN algorithm within each stage of the
convex relaxation using the "sim" datasets. Specifically, we plot the gap towards the optimal objective
F_{λ^{K}}(θ̄^{K}) of the K-th stage versus the wall clock time in Figure 5. We see that our proposed DC
proximal Newton algorithm achieves quadratic convergence, which is consistent with our theory.

³ The oracle bound assumes that we know which variables are relevant in advance. It is not a realistic bound,
but only for comparison purposes.
Table 1: Quantitative timing comparisons for the nonconvex regularized sparse logistic regression. The
average values and the standard deviations (in parentheses) of the timing performance (in seconds) over
10 random trials are presented.

           madelon          gisette          sim_1k           sim_5k           sim_10k
DC+PN      1.51(±0.01)s     5.35(±0.11)s     1.07(±0.02)s     4.53(±0.06)s     8.82(±0.04)s
           obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01
DC+ACD     5.83(±0.03)s     18.92(±2.25)s    9.46(±0.09)s     16.20(±0.24)s    19.1(±0.56)s
           obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01
DC+APG     1.60(±0.03)s     207(±2.25)s      17.8(±1.23)s     111(±1.28)s      222(±5.79)s
           obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01

Figure 5: Timing comparisons in wall clock time on (a) the simulated data and (b) the gisette data.
Our proposed DC proximal Newton algorithm demonstrates superior quadratic convergence and
significantly outperforms the DC proximal gradient algorithm.
5  Discussions
We provide further discussions on the superior performance of our DC proximal Newton algorithm.
There exist two major drawbacks of existing multi-stage convex relaxation based first order algorithms:

(I) The first order algorithms have significant computational overhead in each iteration. For example,
for GLM, computing gradients requires frequently evaluating the cumulant function and its derivatives.
This often involves extensive non-arithmetic operations: the log(·) and exp(·) functions, which
naturally appear in the cumulant function and its derivatives, are computationally expensive. To the best
of our knowledge, even if we use some efficient numerical methods for calculating exp(·) in [28, 19],
the computation still needs at least 10–30 times more CPU cycles than basic arithmetic operations,
e.g., multiplications. Our proposed DC proximal Newton algorithm cannot avoid calculating the
cumulant function and its derivatives when computing quadratic approximations. The computation,
however, is much less intense, since the convergence is quadratic.

(II) The first order algorithms are computationally expensive with the step size selection. Although
for certain GLMs, e.g., sparse logistic regression, we can choose the step size parameter as
η = 1 / (4 λ_max((1/n) Σ_{i=1}^n x_i x_i^⊤)), such a step size often leads to poor empirical performance.
In contrast, as our theoretical analysis and experiments suggest, the proposed DC proximal Newton
algorithm needs very few line search steps, which saves much computational effort.

Some recent works on proximal Newton or inexact proximal Newton also demonstrate local quadratic
convergence guarantees [37, 38]. However, the conditions there are much more stringent than the
SE property in terms of the dependence on the problem dimensions. Specifically, their quadratic
convergence can only be guaranteed in a much smaller neighborhood. For example, the constant
nullspace strong convexity in [37], which plays the role of the smallest sparse eigenvalue ρ⁻_{s*+2s̃}
in our analysis, can be as small as 1/d. Note that ρ⁻_{s*+2s̃} can be (almost) independent of d in our
case [6]. Therefore, instead of a constant radius as in our analysis, they can only guarantee quadratic
convergence in a region with radius O(1/d), which is very small in high dimensions. A similar issue
exists in [38]: the region of quadratic convergence is too small.
References
[1] Larry Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
[2] Amir Beck and Marc Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419–2434, 2009.
[3] Alexandre Belloni, Victor Chernozhukov, and Lie Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4):791–806, 2011.
[4] Peter J Bickel, Yaacov Ritov, and Alexandre B Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] Peter Bühlmann and Sara Van De Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media, 2011.
[7] Ani Eloyan, John Muschelli, Mary Beth Nebel, Han Liu, Fang Han, Tuo Zhao, Anita D Barber, Suresh Joel, James J Pekar, Stewart H Mostofsky, et al. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging. Frontiers in Systems Neuroscience, 6, 2012.
[8] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[9] Jianqing Fan, Han Liu, Qiang Sun, and Tong Zhang. TAC for sparse learning: Simultaneous control of algorithmic complexity and statistical error. arXiv preprint arXiv:1507.01037, 2015.
[10] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[11] Isabelle Guyon, Steve Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural Information Processing Systems, pages 545–552, 2005.
[12] Jason D Lee, Yuekai Sun, and Michael A Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420–1443, 2014.
[13] Xingguo Li, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, and Tuo Zhao. A first order free lunch for sqrt-lasso. arXiv preprint arXiv:1605.07950, 2016.
[14] Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt. Stochastic variance reduced optimization for nonconvex sparse learning. In International Conference on Machine Learning, pages 917–925, 2016.
[15] Xingguo Li, Tuo Zhao, Tong Zhang, and Han Liu. The picasso package for nonconvex regularized M-estimation in high dimensions in R. Technical Report, 2015.
[16] Po-Ling Loh and Martin J Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 2015. To appear.
[17] Zhi-Quan Luo and Paul Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[18] Julien Mairal, Francis Bach, Jean Ponce, et al. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 8(2-3):85–283, 2014.
[19] A Cristiano I Malossi, Yves Ineichen, Costas Bekas, and Alessandro Curioni. Fast exponential computation on SIMD architectures. Proc. of HIPEAC-WAPCO, Amsterdam NL, 2015.
[20] Peter McCullagh. Generalized linear models. European Journal of Operational Research, 16(3):285–292, 1984.
[21] Benjamin M Neale, Yan Kou, Li Liu, Avi Ma'ayan, Kaitlin E Samocha, Aniko Sabo, Chiao-Feng Lin, Christine Stevens, Li-San Wang, Vladimir Makarov, et al. Patterns and rates of exonic de novo mutations in autism spectrum disorders. Nature, 485(7397):242–245, 2012.
[22] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[23] Yu Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[24] Yang Ning, Tianqi Zhao, and Han Liu. A likelihood ratio framework for high dimensional semiparametric regression. arXiv preprint arXiv:1412.2295, 2014.
[25] Johann Pfanzagl. Parametric Statistical Theory. Walter de Gruyter, 1994.
[26] Maxim Raginsky, Rebecca M Willett, Zachary T Harmany, and Roummel F Marcia. Compressed sensing performance bounds under Poisson noise. IEEE Transactions on Signal Processing, 58(8):3990–4002, 2010.
[27] Garvesh Raskutti, Martin J Wainwright, and Bin Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11(8):2241–2259, 2010.
[28] Nicol N Schraudolph. A fast, compact approximation of the exponential function. Neural Computation, 11(4):853–862, 1999.
[29] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for ℓ1-regularized loss minimization. Journal of Machine Learning Research, 12:1865–1892, 2011.
[30] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[31] Robert Tibshirani, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, and Ryan J Tibshirani. Strong rules for discarding predictors in Lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):245–266, 2012.
[32] Sara A van de Geer. High-dimensional generalized linear models and the Lasso. The Annals of Statistics, 36(2):614–645, 2008.
[33] Sara A van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[34] Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014.
[35] Lin Xiao and Tong Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062–1091, 2013.
[36] Yi Yang and Hui Zou. An efficient algorithm for computing the HHSVM and its generalizations. Journal of Computational and Graphical Statistics, 22(2):396–415, 2013.
[37] Ian En-Hsu Yen, Cho-Jui Hsieh, Pradeep K Ravikumar, and Inderjit S Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In Advances in Neural Information Processing Systems, pages 1008–1016, 2014.
[38] Man-Chung Yue, Zirui Zhou, and Anthony Man-Cho So. Inexact regularized proximal Newton method: provable convergence guarantees for non-smooth convex minimization without strong convexity. arXiv preprint arXiv:1605.07522, 2016.
[39] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[40] Tong Zhang. Analysis of multi-stage convex relaxation for sparse regularization. Journal of Machine Learning Research, 11:1081–1107, 2010.
[41] Tong Zhang et al. Multi-stage convex relaxation for feature selection. Bernoulli, 19(5B):2277–2293, 2013.
[42] Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, and Larry Wasserman. The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research, 13:1059–1062, 2012.
[43] Tuo Zhao, Han Liu, and Tong Zhang. Pathwise coordinate optimization for sparse learning: Algorithm and theory. arXiv preprint arXiv:1412.7477, 2014.
[44] Shuheng Zhou. Restricted eigenvalue conditions on subgaussian random matrices. arXiv preprint arXiv:0912.4045, 2009.
| 6867 |@word mild:1 madelon:2 trial:1 polynomial:1 norm:1 nd:2 c0:2 r:3 covariance:1 jacob:1 hsieh:1 wrapper:1 liu:10 contains:2 series:2 initial:2 tuned:1 outperforms:2 existing:4 luo:1 john:3 numerical:3 additive:2 realistic:1 plot:1 update:5 selected:1 harmany:1 amir:1 runze:1 core:1 characterization:2 c6:3 zhang:8 mathematical:1 along:1 c2:4 direct:1 j2s:2 prove:3 overhead:1 introduce:2 manner:1 shuheng:1 behavior:2 frequently:1 multi:5 kou:1 decomposed:1 decreasing:1 zhi:1 cpu:1 curse:1 increasing:1 becomes:2 provided:7 notation:2 moreover:5 bounded:2 gisette:2 mitigated:1 medium:1 hyperactive:1 argmin:4 minimizes:1 dror:1 unified:1 guarantee:24 pseudo:1 concave:4 biometrika:1 k2:1 demonstrates:1 control:2 medical:1 grant:1 omit:2 superiority:1 appear:2 positive:3 before:1 local:18 timing:3 despite:1 ak:1 analyzing:1 establishing:1 path:2 black:1 twice:1 initialization:15 emphasis:1 studied:1 dantzig:1 challenging:1 sara:3 limited:1 unique:5 practice:3 procedure:3 suresh:1 area:1 empirical:2 yan:1 attain:2 significantly:2 boyd:1 composite:2 word:1 jui:1 suggest:1 convenience:1 close:3 cannot:1 selection:6 applying:1 optimize:1 conventional:1 equivalent:2 deterministic:2 missing:1 straightforward:1 attention:1 l:6 convex:58 independently:2 decomposable:2 simplicity:4 recovery:2 immediately:1 disorder:2 wasserman:1 estimator:15 rule:2 vandenberghe:1 fang:1 coordinate:8 variation:1 analogous:1 limiting:2 annals:4 suppose:4 play:1 programming:7 kathryn:1 deblurring:1 trend:1 satisfying:3 jk:1 expensive:2 gorithm:1 gunn:1 corroborated:2 subproblem:2 preprint:6 solved:1 enters:4 wang:3 region:10 cycle:1 sun:2 decrease:1 removed:3 acd:4 alessandro:1 pd:3 convexity:9 complexity:1 covariates:1 benjamin:1 multistage:8 nesterov:1 raise:1 solving:5 rewrite:1 algo:1 incur:1 asa:1 efficiency:1 po:1 joint:1 darpa:1 regularizer:5 walter:1 fast:6 picasso:1 neighborhood:4 avi:1 saunders:1 shalev:1 jean:1 heuristic:1 widely:2 solve:5 denser:1 relax:1 compressed:1 novo:1 cov:1 statistic:7 final:1 sequence:2 eigenvalue:10 differentiable:1 propose:5 product:1 jarvis:3 relevant:1 hadamard:1 combining:1 poorly:1 achieve:2 intuitive:1 inducing:1 exploiting:1 convergence:37 double:1 optimum:10 tianqi:1 ben:1 depending:1 illustrate:1 ij:1 op:7 progress:1 sim:1 strong:18 implemented:3 involves:1 implies:7 ning:1 radius:6 closely:1 drawback:1 stevens:1 stochastic:2 centered:1 stringent:1 larry:2 cmin:3 bin:2 explains:1 require:1 generalization:1 wall:4 homotopy:2 preliminary:1 proposition:4 ryan:1 frontier:1 hold:10 sufficiently:6 considered:1 normal:1 exp:4 algorithmic:3 major:1 achieves:5 bickel:1 smallest:8 omitted:1 nebel:1 purpose:1 uniqueness:1 estimation:11 chernozhukov:1 integrates:1 proc:1 sensitive:1 largest:4 weighted:1 minimization:4 pekar:1 beth:1 gaussian:2 avoid:2 pn:7 shrinkage:2 zhou:2 gatech:1 conjunction:1 ponce:1 improvement:2 notational:5 bernoulli:2 likelihood:6 methodological:1 tech:1 contrast:1 attains:2 roeder:1 stopping:4 chiao:1 anita:1 eliminate:1 interested:1 issue:3 classification:1 aforementioned:1 ill:1 denoted:1 logn:1 resonance:1 constrained:1 special:1 ness:1 orange:1 initialize:1 simd:1 having:1 beach:1 qiang:1 yu:3 nearly:1 report:2 few:4 randomly:1 beck:1 connects:1 statistician:1 n1:3 attempt:1 psd:1 friedman:2 huge:1 highly:1 joel:1 umn:1 nl:1 pradeep:2 yfa:1 regularizers:7 predefined:2 partial:1 necessary:2 intense:1 indexed:2 taylor:1 theoretical:4 rsc:3 column:4 modeling:15 teboulle:1 stewart:1 hlmann:2 deviation:3 entry:7 uniform:1 predictor:1 too:2 graphic:1 
characterize:2 proximal:46 combined:2 cho:2 st:1 international:1 siam:3 negahban:1 lee:1 michael:1 hopkins:1 continuously:1 choose:1 american:1 zhao:9 derivative:3 chung:1 pmin:1 li:7 de:5 summarized:1 zhaoran:1 later:3 performed:1 jason:2 lab:1 root:1 sup:1 francis:1 start:2 maintains:2 parallel:2 shai:1 simon:1 mutation:1 elaborated:1 yen:1 square:3 ni:1 yves:1 variance:2 largely:1 efficiently:1 weak:1 researcher:2 autism:1 sqrt:1 simultaneous:2 trevor:2 definition:2 inexact:2 bekas:1 c7:3 james:1 naturally:1 associated:1 proof:4 con:2 hsu:1 sampled:4 stop:1 dataset:2 costa:1 popular:2 revise:1 recall:3 knowledge:2 hur:1 dimensionality:1 hhsvm:1 ou:1 sophisticated:3 alexandre:2 steve:1 follow:4 methodology:1 entrywise:1 ritov:1 done:1 strongly:4 stage:43 until:1 clock:4 jerome:2 logistic:4 seminorm:1 mary:1 usa:1 effect:1 usage:1 verify:3 unbiased:1 k22:2 counterpart:2 true:2 regularization:10 requiring:1 nonzero:5 dhillon:1 illustrated:2 exonic:1 razor:1 illustrative:1 maintained:1 criterion:4 generalized:5 hong:1 demonstrate:1 performs:1 christine:1 image:3 krl:1 yaacov:1 superior:4 garvesh:1 behaves:1 raskutti:1 rl:5 enthusiasm:1 association:1 interpretation:2 lieven:1 willett:1 significant:4 refer:1 isabelle:1 cambridge:1 ai:2 tac:1 smoothness:7 rd:12 tuning:1 mathematics:1 minnesota:1 han:10 longer:1 recent:3 irrelevant:2 inf:1 certain:2 nonconvex:20 jianqing:2 success:2 yi:5 victor:1 seen:2 greater:1 relaxed:1 additional:2 ge2:1 signal:7 ii:6 arithmetic:2 stephen:1 yuekai:1 reduces:1 smooth:4 ing:1 faster:1 characterized:1 technical:1 bach:1 long:1 lin:3 schraudolph:1 mle:2 ravikumar:2 coded:2 va:1 parenthesis:1 regression:9 basic:1 essentially:8 vision:2 poisson:2 arxiv:12 iteration:17 achieved:1 c1:3 addition:2 semiparametric:1 ayan:1 source:1 rest:1 yue:1 induced:1 undirected:1 quan:1 leveraging:2 nonconcave:1 lafferty:1 obj:15 integer:3 subgaussian:1 yang:2 intermediate:2 iii:3 easy:1 enough:3 automated:1 tgt:29 zi:2 architecture:1 hastie:2 li1:1 restrict:2 suboptimal:1 competing:2 reduce:1 lasso:7 i7:1 gb:1 sahand:1 effort:1 penalty:2 loh:1 peter:4 hessian:9 remark:2 tewari:1 se:16 involve:1 detailed:1 tsybakov:1 reduced:1 generate:2 exist:2 percentage:2 nsf:1 neuroscience:1 tibshirani:4 diagnosis:1 ani:1 nonconvexity:1 n66001:1 imaging:2 v1:1 relaxation:31 geometrically:1 ram:1 graph:1 raginsky:1 prob:1 package:3 named:1 clipped:1 family:1 throughout:2 almost:1 guyon:1 electronic:1 raman:2 vex:2 appendix:6 scaling:1 submatrix:2 bound:9 apg:3 guaranteed:1 correspondence:1 fan:2 quadratic:23 oracle:7 noah:1 belloni:1 software:2 min:3 the0:1 xingguo:5 martin:3 pacific:1 scad:2 ball:1 poor:1 remain:1 smaller:2 wi:2 cun:1 rob:1 aaa:1 b:2 lem:1 lunch:1 explained:1 restricted:11 gradually:1 glm:7 computationally:2 mcp:2 eventually:3 fortran:1 know:2 serf:1 operation:2 apply:3 generic:10 appropriate:1 magnetic:1 save:1 assumes:2 denotes:2 graphical:1 newton:35 cmax:2 cally:1 calculating:2 restrictive:1 especially:1 establish:1 society:2 feng:1 objective:4 already:1 strategy:2 fa:1 dependence:1 parametric:1 gradient:15 deficit:1 simulated:2 gence:1 vd:1 barber:1 tseng:1 provable:1 code:1 index:4 illustration:1 ratio:1 minimizing:2 vladimir:1 robert:2 subproblems:1 negative:3 rise:1 suppress:1 design:3 motivates:3 av:1 observation:3 datasets:3 acknowledge:1 finite:4 descent:10 extended:1 dc:32 kvk0:2 tuo:9 rebecca:1 cast:1 subvector:2 c3:4 extensive:1 c4:2 bound3:1 tremendous:1 boost:2 nip:2 address:2 capped:3 pattern:1 bien:1 sparsity:5 challenge:2 summarize:1 gideon:1 tb:1 ambuj:1 
including:3 max:7 memory:1 royal:2 wainwright:3 critical:1 natural:2 warm:14 regularized:18 business:1 minimax:2 scheme:1 julien:1 conic:1 arora:2 genomics:1 understanding:1 literature:1 geometric:2 multiplication:1 nicol:1 nonstrongly:2 haupt:2 loss:1 generation:1 versus:1 foundation:1 integrate:1 quantitive:1 sufficient:1 consistent:1 s0:8 xiao:1 principle:1 occam:1 cristiano:1 row:5 penalized:1 repeat:1 last:2 free:1 enjoys:3 bias:2 absolute:1 sparse:54 ghz:1 van:3 curve:1 dimension:11 zachary:1 world:2 evaluating:1 author:2 c5:2 san:1 spam:1 transaction:2 approximate:7 obtains:2 selector:1 compact:1 global:8 active:2 kkt:3 mairal:1 xi:8 shwartz:1 alternatively:1 spectrum:1 search:7 iterative:1 continuous:2 why:1 table:2 yang2:1 nature:1 terminate:4 ca:1 ignoring:1 operational:1 tencent:1 european:1 zou:1 anthony:1 domain:5 vj:4 marc:1 ling:1 paul:1 noise:1 complementary:1 pivotal:1 intel:1 en:1 rithm:3 georgia:1 slow:2 tong:8 precision:4 sub:1 exponential:2 lie:1 isye:1 third:1 nullspace:2 ian:1 neale:1 theorem:11 discarding:1 sensing:1 r2:11 evidence:1 exists:4 sequential:1 maxim:1 hui:2 magnitude:1 conditioned:1 sx:1 gap:1 backtracking:6 simply:1 glmnet:1 amsterdam:1 pathwise:4 inderjit:1 springer:1 minimizer:3 satisfies:6 relies:1 mingyi:1 ma:1 gruyter:1 conditional:1 viewed:1 towards:2 lipschitz:2 man:2 mccullagh:1 specifically:10 except:2 uniformly:1 denoising:1 lemma:4 total:3 called:1 geer:3 duality:2 e:14 highdimensional:2 hypothetically:1 support:3 armijo:2 cumulant:6 jonathan:1 accelerated:2 incorporate:1 johann:1 princeton:1 d1:2 correlated:1 |
6,487 | 6,868 | #Exploration: A Study of Count-Based Exploration
for Deep Reinforcement Learning
Haoran Tang1∗, Rein Houthooft3,4∗, Davis Foote2, Adam Stooke2, Xi Chen2‡,
Yan Duan2‡, John Schulman4, Filip De Turck3, Pieter Abbeel2‡
1 UC Berkeley, Department of Mathematics
2 UC Berkeley, Department of Electrical Engineering and Computer Sciences
3 Ghent University – imec, Department of Information Technology
4 OpenAI
Abstract
Count-based exploration algorithms are known to perform near-optimally when
used in conjunction with tabular reinforcement learning (RL) methods for solving
small discrete Markov decision processes (MDPs). It is generally thought that
count-based methods cannot be applied in high-dimensional state spaces, since
most states will only occur once. Recent deep RL exploration strategies are able to
deal with high-dimensional continuous state spaces through complex heuristics,
often relying on optimism in the face of uncertainty or intrinsic motivation. In
this work, we describe a surprising finding: a simple generalization of the classic
count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash
codes, which allows to count their occurrences with a hash table. These counts
are then used to compute a reward bonus according to the classic count-based
exploration theory. We find that simple hash functions can achieve surprisingly
good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis
reveals important aspects of a good hash function: 1) having appropriate granularity
and 2) encoding information relevant to solving the MDP. This exploration strategy
achieves near state-of-the-art performance on both continuous control tasks and
Atari 2600 games, hence providing a simple yet powerful baseline for solving
MDPs that require considerable exploration.
1  Introduction
Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning
through trial and error to maximize rewards. It is impossible for the agent to act near-optimally until
it has sufficiently explored the environment and identified all of the opportunities for high reward, in
all scenarios. A core challenge in RL is how to balance exploration?actively seeking out novel states
and actions that might yield high rewards and lead to long-term gains; and exploitation?maximizing
short-term rewards using the agent?s current knowledge. While there are exploration techniques
for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for highdimensional state spaces; therefore, developing more general and robust exploration techniques is an
active area of research.
∗ These authors contributed equally. Correspondence to: Haoran Tang <[email protected]>, Rein
Houthooft <[email protected]>
‡ Work done at OpenAI
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Most of the recent state-of-the-art RL results have been obtained using simple exploration strategies
such as uniform sampling [21] and i.i.d./correlated Gaussian noise [19, 30]. Although these heuristics
are sufficient in tasks with well-shaped rewards, the sample complexity can grow exponentially (with
state space size) in tasks with sparse rewards [25]. Recently developed exploration strategies for
deep RL have led to significantly improved performance on environments with sparse rewards. Bootstrapped DQN [24] led to faster learning in a range of Atari 2600 games by training an ensemble of
Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance
on Montezuma's Revenge, an extremely challenging Atari 2600 game [4]. Variational Information
Maximizing Exploration (VIME, [13]) encourages the agent to explore by acquiring information
about environment dynamics, and performs well on various robotic locomotion problems with sparse
rewards. However, we have not seen a very simple and fast method that can work across different
domains.
Some of the classic, theoretically-justified exploration methods are based on counting state-action
visitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB
algorithm of [18] chooses the action a_t at time t that maximizes r̂(a_t) + √(2 log t / n(a_t)), where
r̂(a_t) is the estimated reward, and n(a_t) is the number of times action a_t was previously chosen. In
the MDP setting, some of the algorithms have a similar structure; for example, Model Based Interval
Estimation-Exploration Bonus (MBIE-EB) of [34] counts state-action pairs with a table n(s, a) and
adds a bonus reward of the form β/√(n(s, a)) to encourage exploring less visited pairs. [16] show
that the inverse-square-root dependence is optimal. MBIE and related algorithms assume that the
augmented MDP is solved analytically at each timestep, which is only practical for small finite state
spaces.
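For reference, here is a minimal sketch (ours, not from the paper) of the UCB rule just described, for a finite set of arms:

```python
import numpy as np

def ucb_action(reward_sums, counts, t):
    """Pick argmax_a  r_hat(a) + sqrt(2 log t / n(a)); untried arms go first."""
    untried = np.flatnonzero(counts == 0)
    if untried.size > 0:
        return int(untried[0])
    r_hat = reward_sums / counts                       # empirical mean reward
    return int(np.argmax(r_hat + np.sqrt(2.0 * np.log(t) / counts)))
```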
This paper presents a simple approach for exploration, which extends classic counting-based methods
to high-dimensional, continuous state spaces. We discretize the state space with a hash function and
apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately
balance generalizing across states and distinguishing between states. We select problems from rllab
[8] and Atari 2600 [3] featuring sparse rewards, and demonstrate near state-of-the-art performance on
several games known to be hard for na?ve exploration strategies. The main strength of the presented
approach is that it is fast, flexible and complementary to most existing RL algorithms.
In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2); demonstrates its effectiveness on challenging deep
RL benchmark problems and analyzes key components of well-designed hash functions (Section 4).
2  Methodology

2.1  Notation
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by
(S, A, P, r, ρ_0, γ, T), in which S is the state space, A the action space, P a transition probability
distribution, r : S × A → R a reward function, ρ_0 an initial state distribution, γ ∈ (0, 1] a
discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted
reward E_{π,P}[ Σ_{t=0}^T γ^t r(s_t, a_t) ] over a policy π, which outputs a distribution over actions
given a state.
2.2 Count-Based Exploration via Static Hashing
Our approach discretizes the state space with a hash function $\phi : \mathcal{S} \to \mathbb{Z}$. An exploration bonus $r^+ : \mathcal{S} \to \mathbb{R}$ is added to the reward function, defined as
$$r^+(s) = \frac{\beta}{\sqrt{n(\phi(s))}}, \qquad (1)$$
where $\beta \in \mathbb{R}_{\geq 0}$ is the bonus coefficient. Initially the counts $n(\cdot)$ are set to zero for the whole range of $\phi$. For every state $s_t$ encountered at time step $t$, $n(\phi(s_t))$ is increased by one. The agent is trained with rewards $(r + r^+)$, while performance is evaluated as the sum of rewards without bonuses.
Algorithm 1: Count-based exploration through static hashing, using SimHash
1: Define state preprocessor g : S → R^D
2: (In case of SimHash) Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3: Initialize a hash table with values n(·) ← 0
4: for each iteration j do
5:     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6:     Compute hash codes through any LSH method, e.g., for SimHash, φ(s_m) = sgn(A g(s_m))
7:     Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
8:     Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm
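For illustration, a minimal NumPy sketch of Algorithm 1 follows; the class name and the default values of `dim`, `k`, and `beta` are assumptions chosen for the example, not the settings used in the experiments.

```python
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Count-based exploration with SimHash (a sketch of Algorithm 1)."""

    def __init__(self, dim, k=32, beta=0.01, seed=0):
        rng = np.random.RandomState(seed)
        self.A = rng.randn(k, dim)      # entries drawn i.i.d. from N(0, 1)
        self.counts = defaultdict(int)  # hash table n(.)
        self.beta = beta

    def _hash(self, g_s):
        # phi(s) = sgn(A g(s)), stored as a hashable tuple of sign bits
        return tuple((self.A @ g_s > 0).astype(np.int8))

    def update_and_bonus(self, g_s):
        code = self._hash(g_s)
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])

# Usage: for each collected state s, train the policy on
# r(s, a) + counter.update_and_bonus(g(s)) with any RL algorithm.
```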
Note that our approach is a departure from count-based exploration methods such as MBIE-EB
since we use a state-space count n(s) rather than a state-action count n(s, a). State-action counts
n(s, a) are investigated in the Supplementary Material, but no significant performance gains over state counting were observed. A possible reason is that the policy itself is sufficiently random to
try most actions at a novel state.
Clearly the performance of this method will strongly depend on the choice of hash function φ. One important choice we can make regards the granularity of the discretization: we would like for "distant" states to be counted separately while "similar" states are merged. If desired, we can incorporate prior knowledge into the choice of φ, if there is a set of salient state features which are known to be relevant. A short discussion on this matter is given in the Supplementary Material.
Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to
convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash
functions for querying nearest neighbors based on certain similarity metrics [2]. A computationally
efficient type of LSH is SimHash [6], which measures similarity by angular distance. SimHash retrieves a binary code of state $s \in \mathcal{S}$ as
$$\phi(s) = \text{sgn}(A g(s)) \in \{-1, 1\}^k, \qquad (2)$$
where $g : \mathcal{S} \to \mathbb{R}^D$ is an optional preprocessing function and $A$ is a $k \times D$ matrix with i.i.d. entries drawn from a standard Gaussian distribution $N(0, 1)$. The value for $k$ controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states.
2.3 Count-Based Exploration via Learned Hashing
When the MDP states have a complex structure, as is the case with image observations, measuring
their similarity directly in pixel space fails to provide the semantic similarity measure one would desire.
Previous work in computer vision [7, 20, 36] introduced manually designed feature representations of images that are suitable for semantic tasks, including detection and classification. More recent
methods learn complex features directly from data by training convolutional neural networks [12,
17, 31]. Considering these results, it may be difficult for a method such as SimHash to cluster states
appropriately using only raw pixels.
Therefore, rather than using SimHash, we propose to use an autoencoder (AE) to learn meaningful
hash codes in one of its hidden layers as a more advanced LSH method. This AE takes as input
states $s$ and contains one special dense layer comprised of $D$ sigmoid functions. By rounding the sigmoid activations $b(s)$ of this layer to their closest binary number $\lfloor b(s) \rceil \in \{0, 1\}^D$, any state $s$ can be binarized. This is illustrated in Figure 1 for a convolutional AE.
A problem with this architecture is that dissimilar inputs $s_i, s_j$ can map to identical hash codes $\lfloor b(s_i) \rceil = \lfloor b(s_j) \rceil$, but the AE still reconstructs them perfectly. For example, if $b(s_i)$ and $b(s_j)$ have values 0.6 and 0.7 at a particular dimension, the difference can be exploited by deconvolutional layers in order to reconstruct $s_i$ and $s_j$ perfectly, although that dimension rounds to the same binary value. One can imagine replacing the bottleneck layer $b(s)$ with the hash codes $\lfloor b(s) \rceil$, but then
gradients cannot be back-propagated through the rounding function. A solution proposed by Gregor et al. [10] and Salakhutdinov & Hinton [28] is to inject uniform noise $U(-a, a)$ into the sigmoid
[Figure 1 diagram: convolutional downsampling layers (96 filters, 6×6), dense layers (2400, 1024, 512) leading to the binary code layer b(·) with rounding ⌊·⌉, and a deconvolutional decoder ending in a linear layer and a pixel-wise softmax over 52×52 frames.]
Figure 1: The autoencoder (AE) architecture for ALE; the solid block represents the dense sigmoidal binary code layer, after which uniform noise U(−a, a) is injected.
Algorithm 2: Count-based exploration using learned hash codes
1: Define state preprocessor g : S → {0, 1}^D as the binary code resulting from the autoencoder (AE)
2: Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3: Initialize a hash table with values n(·) ← 0
4: for each iteration j do
5:     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6:     Add the state samples {s_m}_{m=0}^{M} to a FIFO replay pool R
7:     if j mod j_update = 0 then
8:         Update the AE loss function in Eq. (3) using samples drawn from the replay pool {s_n}_{n=1}^{N} ~ R, for example using stochastic gradient descent
9:     Compute g(s_m) = ⌊b(s_m)⌉, the D-dim rounded hash code for s_m learned by the AE
10:    Project g(s_m) to a lower dimension k via SimHash as φ(s_m) = sgn(A g(s_m))
11:    Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
12:    Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm
activations. By choosing uniform noise with $a > \frac{1}{4}$, the AE is only capable of (always) reconstructing distinct state inputs $s_i \neq s_j$ if it has learned to spread the sigmoid outputs sufficiently far apart, $|b(s_i) - b(s_j)| > \epsilon$, in order to counteract the injected noise.
As such, the loss function over a set of collected states $\{s_n\}_{n=1}^{N}$ is defined as
$$L\left(\{s_n\}_{n=1}^{N}\right) = -\frac{1}{N} \sum_{n=1}^{N} \left[ \log p(s_n) - \frac{\lambda}{K} \sum_{i=1}^{D} \min\left\{ (1 - b_i(s_n))^2,\; b_i(s_n)^2 \right\} \right], \qquad (3)$$
with $p(s_n)$ the AE output. This objective function consists of a negative log-likelihood term and a term that pressures the binary code layer to take on binary values, scaled by $\lambda \in \mathbb{R}_{\geq 0}$. The reasoning behind this latter term is that it might happen that for particular states, a certain sigmoid unit is never used. Therefore, its value might fluctuate around $\frac{1}{2}$, causing the corresponding bit in the binary code $\lfloor b(s) \rceil$ to flip over the agent lifetime. Adding this second loss term ensures that an unused bit takes on an arbitrary binary value.
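Both stabilizing ingredients, the injected noise and the binarization pressure of Eq. (3), are small enough to sketch directly; the default values of `a`, `lam`, and `K` below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def noisy_code(b, a=0.3, rng=np.random):
    """Inject uniform noise U(-a, a) into the sigmoid activations (a > 1/4)."""
    return b + rng.uniform(-a, a, size=b.shape)

def binarization_penalty(b, lam=10.0, K=256):
    """Second term of Eq. (3): (lam / K) * sum_i min{(1 - b_i)^2, b_i^2}.

    b holds the sigmoid activations of the code layer for one state; the
    penalty is zero only when every activation sits exactly at 0 or 1.
    """
    return (lam / K) * np.sum(np.minimum((1.0 - b) ** 2, b ** 2))
```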
For Atari 2600 image inputs, since the pixel intensities are discrete values in the range [0, 255],
we make use of a pixel-wise softmax output layer [37] that shares weights between all pixels. The
architectural details are described in the Supplementary Material and are depicted in Figure 1. Because
the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code $\lfloor b(s) \rceil$, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2).
On the one hand, the mapping from state to code needs to remain relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2, line 8). A solution is to downsample the binary code to a very low dimension, or to slow down the training process. On the other hand, the code has to remain relatively unique
for states that are both distinct and close together on the image manifold. This is tackled both by
the second term in Eq. (3) and by the saturating behavior of the sigmoid units. States already well
represented by the AE tend to saturate the sigmoid activations, causing the resulting loss gradients to
be close to zero, making the code less prone to change.
3 Related Work
Classic count-based methods such as MBIE [33], MBIE-EB and [16] solve an approximate Bellman
equation as an inner loop before the agent takes an action [34]. As such, bonus rewards are propagated
immediately throughout the state-action space. In contrast, contemporary deep RL algorithms
propagate the bonus signal based on rollouts collected from interacting with environments, with
value-based [21] or policy gradient-based [22, 30] methods, at limited speed. In addition, our proposed method is intended to work with contemporary deep RL algorithms; it differs from classical count-based methods in that it relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling
the gaps between our method and classic theories is an important direction of future research.
A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty [5] but is not restricted to using counting to implement "optimism", e.g., R-Max [5], UCRL
[14], and E3 [15]. These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in
tabular settings.
Bayesian RL methods [9, 11, 16, 35], which keep track of a distribution over MDPs, are an alternative
to optimism-based methods. Extensions to continuous state space have been proposed by [27] and
[25].
Another type of exploration is curiosity-based exploration. These methods try to capture the agent's surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers
novel states. We refer the reader to [29] and [26] for an extensive review on curiosity and intrinsic
rewards.
Several exploration strategies for deep RL have been proposed to handle high-dimensional state
space recently. [13] propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics, which is used as an exploration bonus. [32] propose to use the
prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through
bootstrapping is proposed by [24], using bootstrapped Q-functions.
The most related exploration strategy is proposed by [4], in which an exploration bonus is added
inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived
from its log-probability improvement according to a density model over the state space, which in the
limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods perform approximate counting to obtain the necessary generalization over unseen states. The difference is that a density model has to be designed and learned to achieve good generalization for pseudo-counts, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model $\rho(s) = \frac{n(\phi(s))}{N}$ over all visited states, where $N$ is the total number of states visited. Another method similar to hashing is proposed by [1], which clusters states and counts
cluster centers instead of the true states, but this method has yet to be tested on standard exploration
benchmark problems.
4 Experiments
Experiments were designed to investigate and answer the following research questions:
1. Can count-based exploration through hashing improve performance significantly across
different domains? How does the proposed method compare to the current state of the art in
exploration for deep RL?
2. What is the impact of learned or static state preprocessing on the overall performance when
image observations are used?
To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that
feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by
trying out different image preprocessors on Atari 2600 games. Trust Region Policy Optimization
(TRPO, [30]) is chosen as the RL algorithm for all experiments, because it can handle both discrete
and continuous action spaces, can conveniently ensure stable improvement in the policy performance,
and is relatively insensitive to hyperparameter changes. The hyperparameters settings are reported in
the Supplementary Material.
4.1 Continuous Control
The rllab benchmark [8] consists of various control tasks to test deep RL algorithms. We selected
several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and
adopt the experimental setup as defined in [13]; a description can be found in the Supplementary Material. These tasks are all highly difficult to solve with naive exploration strategies, such as adding
Gaussian noise to the actions.
Figure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely (a) MountainCar, (b) CartPoleSwingup, (c) SwimmerGather, and (d) HalfCheetah; taken from [8].
Figure 3: Mean average return of different algorithms on rllab tasks with sparse rewards. The solid
line represents the mean average return, while the shaded area represents one standard deviation, over
5 seeds for the baseline and SimHash (the baseline curves happen to overlap with the axis).
Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME [13] on the classic tasks
MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task
SwimmerGather. Using count-based exploration with hashing is capable of reaching the goal in all
environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does
not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on
MountainCar, while it outperforms VIME on SwimmerGather.
4.2 Arcade Learning Environment
The Arcade Learning Environment (ALE, [3]), which consists of Atari 2600 video games, is an
important benchmark for deep RL due to its high-dimensional state space and wide variety of
games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are
selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma's Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size), which corresponds to 0.4 M frames.
Policies and value functions are neural networks with identical architectures to [22]. Although the
policy and baseline take into account the previous four frames, the counting algorithm only looks at
the latest frame.
Table 1: Atari 2600: average total reward after training for 50 M time steps. Boldface numbers indicate best results. Italic numbers are the best among our methods.

                       Freeway  Frostbite  Gravitar  Montezuma  Solaris  Venture
TRPO (baseline)           16.5       2869       486          0     2758      121
TRPO-pixel-SimHash        31.6       4683       468          0     2897      263
TRPO-BASS-SimHash         28.4       3150       604        238     1201      616
TRPO-AE-SimHash           33.5       5214       482         75     4467      445
Double-DQN                33.3       1683       412          0     3068     98.0
Dueling network            0.0       4672       588          0     2251      497
Gorila                    11.7        605      1054          4      N/A     1245
DQN Pop-Art               33.4       3469       483          0     4544     1172
A3C+                      27.3        507       246        142     2175        0
pseudo-count              29.2       1450       n/a       3439      n/a      369
BASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see [3]) as a static preprocessing function g.
BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the
following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are
large and monochrome, and 3) winning depends mostly on knowing object locations and motions.
We designed an adapted version of BASS,³ which divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that
uniformly partition the intensity range [0, 255]. Mathematically, let C be the cell size (width and
height), B the number of bins, (i, j) cell location, (x, y) pixel location, and z the channel, then
$$\text{feature}(i, j, z) = \left\lfloor \frac{B}{255\,C^2} \sum_{(x,y) \in \text{cell}(i,j)} I(x, y, z) \right\rfloor. \qquad (4)$$
Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code ($\phi(s_t)$ in line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object
locations, but remains invariant to negligible object motions. It is easy to implement and introduces
little computation overhead. However, it is designed for generic Atari game images and may not
capture the structure of each specific game very well.
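A minimal NumPy sketch of the adapted BASS transform of Eq. (4) is given below; the cell size `C` and bin count `B` are example values rather than the experimental settings.

```python
import numpy as np

def bass_features(frame, C=20, B=20):
    """Compute the adapted BASS feature tensor of Eq. (4).

    frame: RGB screen of shape (H, W, 3) with intensities in [0, 255].
    Returns an integer tensor of shape (H // C, W // C, 3).
    """
    H, W, _ = frame.shape
    H, W = (H // C) * C, (W // C) * C              # crop to a multiple of C
    cells = frame[:H, :W].reshape(H // C, C, W // C, C, 3)
    intensity_sum = cells.sum(axis=(1, 3))         # sum over pixels per cell
    # feature(i, j, z) = floor(B / (255 C^2) * sum of intensities in cell(i, j))
    return np.floor(B / (255.0 * C * C) * intensity_sum).astype(np.int64)
```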
We compare our results to double DQN [39], dueling network [40], A3C+ [4], double DQN with pseudo-counts [4], Gorila [23], and DQN Pop-Art [38] on the "null op" metric.⁴ We show training
curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma's Revenge and Venture, where it captures object locations better than other methods.⁵ TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris.
As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma's Revenge and Venture. Therefore, a
static or adaptive preprocessing step can be important for a good hash function.
In conclusion, our count-based exploration method is able to achieve remarkable performance gains
even with simple hash functions like SimHash on the raw pixel space. If coupled with domaindependent state preprocessing techniques, it can sometimes achieve far better results.
A reason why our proposed method does not achieve state-of-the-art performance on all games is that TRPO does not reuse off-policy experience, in contrast to DQN-based algorithms [4, 23, 38], and is
³ The original BASS exploits the fact that at most 128 colors can appear on the screen. Our adapted version does not make this assumption.
⁴ The agent takes no action for a random number (within 30) of frames at the beginning of each episode.
⁵ We provide videos of example game play and visualizations of the difference between Pixel-SimHash and BASS-SimHash at https://www.youtube.com/playlist?list=PLAd-UMX6FkBQdLNWtY8nH1-pzYJA_1T55
[Figure 4 panels: training curves for (a) Freeway, (b) Frostbite, (c) Gravitar, (d) Montezuma's Revenge, (e) Solaris, and (f) Venture; legend: TRPO-AE-SimHash, TRPO, TRPO-BASS-SimHash, TRPO-pixel-SimHash.]
Figure 4: Atari 2600 games: the solid line is the mean average undiscounted return per iteration, while the shaded areas represent one standard deviation, over 5 seeds for the baseline, TRPO-pixel-SimHash, and TRPO-BASS-SimHash, and over 3 seeds for TRPO-AE-SimHash.
hence less efficient in harnessing extremely sparse rewards. This explanation is corroborated by the
experiments done in [4], in which A3C+ (an on-policy algorithm) scores much lower than DQN (an
off-policy algorithm), while using the exact same exploration bonus.
5 Conclusions
This paper demonstrates that a generalization of classical counting techniques through hashing is able
to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs
using function approximators, resulting in near state-of-the-art performance across benchmarks. It
provides a simple yet powerful baseline for solving MDPs that require informed exploration.
Acknowledgments
We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This
research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a
Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a
Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through
grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through
the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully
acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft was
supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).
References
[1] Abel, David, Agarwal, Alekh, Diaz, Fernando, Krishnamurthy, Akshay, and Schapire, Robert E. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.
[2] Andoni, Alexandr and Indyk, Piotr. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 459-468, 2006.
[3] Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
[4] Bellemare, Marc G, Srinivasan, Sriram, Ostrovski, Georg, Schaul, Tom, Saxton, David, and Munos, Remi. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 1471-1479, 2016.
[5] Brafman, Ronen I and Tennenholtz, Moshe. R-Max: A general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.
[6] Charikar, Moses S. Similarity estimation techniques from rounding algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC), pp. 380-388, 2002.
[7] Dalal, Navneet and Triggs, Bill. Histograms of oriented gradients for human detection. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886-893, 2005.
[8] Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1329-1338, 2016.
[9] Ghavamzadeh, Mohammad, Mannor, Shie, Pineau, Joelle, and Tamar, Aviv. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.
[10] Gregor, Karol, Besse, Frederic, Jimenez Rezende, Danilo, Danihelka, Ivo, and Wierstra, Daan. Towards conceptual compression. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 3549-3557, 2016.
[11] Guez, Arthur, Heess, Nicolas, Silver, David, and Dayan, Peter. Bayes-adaptive simulation-based search with value function approximation. In Advances in Neural Information Processing Systems (NIPS), pp. 451-459, 2014.
[12] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[13] Houthooft, Rein, Chen, Xi, Duan, Yan, Schulman, John, De Turck, Filip, and Abbeel, Pieter. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 1109-1117, 2016.
[14] Jaksch, Thomas, Ortner, Ronald, and Auer, Peter. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.
[15] Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[16] Kolter, J Zico and Ng, Andrew Y. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 513-520, 2009.
[17] Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS), pp. 1097-1105, 2012.
[18] Lai, Tze Leung and Robbins, Herbert. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[19] Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[20] Lowe, David G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV), pp. 1150-1157, 1999.
[21] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[22] Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[23] Nair, Arun, Srinivasan, Praveen, Blackwell, Sam, Alcicek, Cagdas, Fearon, Rory, De Maria, Alessandro, Panneershelvam, Vedavyas, Suleyman, Mustafa, Beattie, Charles, Petersen, Stig, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
[24] Osband, Ian, Blundell, Charles, Pritzel, Alexander, and Van Roy, Benjamin. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 4026-4034, 2016.
[25] Osband, Ian, Van Roy, Benjamin, and Wen, Zheng. Generalization and exploration via randomized value functions. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 2377-2386, 2016.
[26] Oudeyer, Pierre-Yves and Kaplan, Frederic. What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1:6, 2007.
[27] Pazis, Jason and Parr, Ronald. PAC optimal exploration in continuous space Markov decision processes. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), 2013.
[28] Salakhutdinov, Ruslan and Hinton, Geoffrey. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969-978, 2009.
[29] Schmidhuber, Jürgen. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.
[30] Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[31] Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[32] Stadie, Bradly C, Levine, Sergey, and Abbeel, Pieter. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[33] Strehl, Alexander L and Littman, Michael L. A theoretical analysis of model-based interval estimation. In Proceedings of the 21st International Conference on Machine Learning (ICML), pp. 856-863, 2005.
[34] Strehl, Alexander L and Littman, Michael L. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
[35] Sun, Yi, Gomez, Faustino, and Schmidhuber, Jürgen. Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In Proceedings of the 4th International Conference on Artificial General Intelligence (AGI), pp. 41-51, 2011.
[36] Tola, Engin, Lepetit, Vincent, and Fua, Pascal. DAISY: An efficient dense descriptor applied to wide-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5):815-830, 2010.
[37] van den Oord, Aaron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1747-1756, 2016.
[38] van Hasselt, Hado, Guez, Arthur, Hessel, Matteo, and Silver, David. Learning values across many orders of magnitude. arXiv preprint arXiv:1602.07714, 2016.
[39] van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016.
[40] Wang, Ziyu, de Freitas, Nando, and Lanctot, Marc. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1995-2003, 2016.
An Empirical Study on The Properties of Random Bases for Kernel Methods
Maximilian Alber, Pieter-Jan Kindermans, Kristof T. Schütt
Technische Universität Berlin
[email protected]

Klaus-Robert Müller
Technische Universität Berlin
Korea University
Max Planck Institut für Informatik

Fei Sha
University of Southern California
[email protected]
Abstract
Kernel machines as well as neural networks possess universal function approximation properties. Nevertheless in practice their ways of choosing the appropriate
function class differ. Specifically neural networks learn a representation by adapting their basis functions to the data and the task at hand, while kernel methods
typically use a basis that is not adapted during training. In this work, we contrast
random features of approximated kernel machines with learned features of neural
networks. Our analysis reveals how these random and adaptive basis functions
affect the quality of learning. Furthermore, we present basis adaptation schemes
that allow for a more compact representation, while retaining the generalization
properties of kernel machines.
1 Introduction
Recent work on scaling kernel methods using random basis functions has shown that their performance
on challenging tasks such as speech recognition can match closely those by deep neural networks [22,
6, 35]. However, research also highlighted two disadvantages of random basis functions. First, a large
number of basis functions, i.e., features, are needed to obtain useful representations of the data. In a
recent empirical study [22], a kernel machine matching the performance of a deep neural network
required a much larger number of parameters. Second, a finite number of random basis functions
lead to an inferior kernel approximation error that is data-specific [30, 32, 36].
Deep neural networks learn representations that are adapted to the data using end-to-end training.
Kernel methods on the other hand can only achieve this by selecting the optimal kernels to represent the data, a challenge that persistently remains. Furthermore, there are interesting cases in which
learning with deep architectures is advantageous, as they require exponentially fewer examples [25].
Yet arguably both paradigms have the same modeling power as the number of training examples goes
to infinity. Moreover, empirical studies suggest that for real-world applications the advantage of one
method over the other is somewhat limited [22, 6, 35, 37].
Understanding the differences between approximated kernel methods and neural networks is crucial
to use them optimally in practice. In particular, there are two aspects that require investigation: (1)
How much performance is lost due to the kernel approximation error of the random basis? (2) What
is the possible gain of adapting the features to the task at hand? Since these effects are expected to be
data-dependent, we argue that an empirical study is needed to complement the existing theoretical
contributions [30, 36, 20, 32, 8].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this work, we investigate these issues by making use of the fact that approximated kernel methods
can be cast as shallow, one-hidden-layer neural networks. The bottom layers of these networks are
random basis functions that are generated in a data-agnostic manner and are not adapted during
training [30, 31, 20, 8]. This stands in stark contrast to, even the conventional single layer, neural
network where the bottom-layer parameters are optimized with respect to the data distribution and
the loss function. Specifically, we designed our experiments to distinguish four cases:
- Random Basis (RB): we use the (approximated) kernel machine in its traditional formulation [30, 8].
- Unsupervised Adapted Basis (UAB): we adapt the basis functions to better approximate the true kernel function.
- Supervised Adapted Basis (SAB): we adapt the basis functions using kernel target alignment [5] to incorporate label information.
- Discriminatively Adapted Basis (DAB): we adapt the basis functions with a discriminative loss function, i.e., optimize jointly over basis and classifier parameters. This corresponds to conventional neural network optimization.
These experiments allow us to isolate the effect of the randomness of the basis and contrast it to data- and task-dependent adaptations. We found that adapted bases consistently outperform random ones:
an unsupervised basis adaption leads to a better kernel approximation than a random approximation,
and, when considering the task at hand, a supervised kernel basis leads to an even more compact model while showing a superior performance compared to the task-agnostic bases. Remarkably, this
performance is retained after transferring the basis to another task and makes this adaption scheme a
viable alternative to a discriminatively adapted basis.
The remainder is structured as follows. After a presentation of related work we explain approximated
kernel machines in context of neural networks and describe our propositions in Sec. 3. In Sec. 4 we
quantify the benefit of adapted basis function in contrast to their random counterparts empirically.
Finally, we conclude in Sec. 5.
2 Related work
To overcome the limitations of kernel learning, several approximation methods have been proposed.
In addition to Nyström methods [34, 7], random Fourier features [30, 31] have gained a lot of
attention. Random features or (faster) enhancements [20, 9, 39, 8] were successfully applied in
many applications [6, 22, 14, 35], and were theoretically analyzed [36, 32]. They inspired scalable
approaches to learn kernels with Gaussian processes [35, 38, 23]. Notably, [2, 24] explore kernels in
the context of neural networks, and, in the field of RBF-networks, basis functions were adapted to the
data by [26, 27].
Our work contributes in several ways: we view kernel machines from a neural network perspective
and delineate the influence of different adaptation schemes. None of the above does this. The related
work [36] compares the data-dependent Nyström approximation to random features. While our approach generalizes to structured matrices, i.e., fast kernel machines, Nyström does not. Most
similar to our work is [37]. They interpret the Fastfood kernel approximation as a neural network.
Their aim is to reduce the number of parameters in a convolutional neural network.
3 Methods
In this section we will detail the relation between kernel approximations with random basis functions
and neural networks. Then, we discuss the different approaches to adapt the basis in order to perform
our analysis.
3.1 Casting kernel approximations as shallow, random neural networks
Kernels are pairwise similarity functions $k(x, x') : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ between two data points $x, x' \in \mathbb{R}^d$. They are equivalent to the inner products in an intermediate, potentially infinite-dimensional feature space produced by a function $\phi : \mathbb{R}^d \to \mathbb{R}^D$:
$$k(x, x') = \phi(x)^T \phi(x'). \qquad (1)$$
Non-linear kernel machines typically avoid using $\phi$ explicitly by applying the kernel trick. They work in the dual space with the (Gram) kernel matrix. This imposes a quadratic dependence on the number of samples $n$ and prevents its application in large scale settings. Several methods have been proposed to overcome this limitation by approximating a kernel machine with the following functional form
$$f(x) = W^T \hat{\phi}(x) + b, \qquad (2)$$
where $\hat{\phi}(x)$ is the approximated kernel feature map. Now, we will explain how to obtain this approximation for the Gaussian and the ArcCos kernel [2]. We chose the Gaussian kernel because it is the default choice for many tasks. On the other hand, the ArcCos kernel yields an approximation consisting of rectified, piece-wise linear units (ReLU) as used in deep learning [28, 11, 19].
Gaussian kernel To obtain the approximation of the Gaussian kernel, we use the following property [30]. Given a smooth, shift-invariant kernel $k(x - x') = k(z)$ with Fourier transform $p(w)$, then:
$$k(z) = \int_{\mathbb{R}^d} p(w)\, e^{j w^T z}\, dw. \qquad (3)$$
Using the Gaussian distribution $p(w) = N(0, \sigma^{-1})$, we obtain the Gaussian kernel
$$k(z) = \exp\left(-\frac{\|z\|_2^2}{2\sigma^2}\right).$$
Thus, the kernel value $k(x, x')$ can be approximated by the inner product between $\hat{\phi}(x)$ and $\hat{\phi}(x')$, where $\hat{\phi}$ is defined as
$$\hat{\phi}(x) = \sqrt{\frac{1}{D}}\,\left[\sin(W_B^T x), \cos(W_B^T x)\right] \qquad (4)$$
and $W_B \in \mathbb{R}^{d \times D/2}$ as a random matrix with its entries drawn from $N(0, \sigma)$. The resulting features are then used to approximate the kernel machine with the implicitly infinite-dimensional feature space,
$$k(x, x') \approx \hat{\phi}(x)^T \hat{\phi}(x'). \qquad (5)$$
ArcCos kernel To yield a better connection to state-of-the-art neural networks we use the ArcCos kernel [2]
$$k(x, x') = \frac{1}{\pi}\,\|x\|\,\|x'\|\, J(\theta)$$
with $J(\theta) = \sin\theta + (\pi - \theta)\cos\theta$ and $\theta = \cos^{-1}\!\left(\frac{x \cdot x'}{\|x\|\,\|x'\|}\right)$, the angle between $x$ and $x'$. The approximation is not based on a Fourier transform, but is given by
$$\hat{\phi}(x) = \sqrt{\frac{1}{D}}\,\max(0, W_B^T x) \qquad (6)$$
with $W_B \in \mathbb{R}^{d \times D}$ being a random Gaussian matrix. This makes the approximated feature map of the ArcCos kernel closely related to ReLUs in deep neural networks.
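For concreteness, both feature maps can be sketched in a few lines of NumPy; the frequencies in `gaussian_rff` are drawn as $w \sim N(0, I/\sigma^2)$, the standard convention for a bandwidth-$\sigma$ Gaussian kernel, and the scaling follows Eqs. 4 and 6.

```python
import numpy as np

def gaussian_rff(X, D, sigma=1.0, seed=0):
    """Random Fourier feature map of Eq. (4); X has shape (n, d), D even."""
    rng = np.random.RandomState(seed)
    W = rng.randn(X.shape[1], D // 2) / sigma  # frequencies w ~ N(0, I / sigma^2)
    Z = X @ W
    return np.sqrt(1.0 / D) * np.hstack([np.sin(Z), np.cos(Z)])

def arccos_features(X, D, seed=0):
    """ArcCos feature map of Eq. (6): a random ReLU layer."""
    rng = np.random.RandomState(seed)
    W = rng.randn(X.shape[1], D)
    return np.sqrt(1.0 / D) * np.maximum(0.0, X @ W)

# The kernel is then approximated by inner products of the feature maps,
# e.g. K_hat = gaussian_rff(X, D) @ gaussian_rff(X2, D).T, as in Eq. (5),
# where X2 is a hypothetical second batch of data points.
```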
Neural network interpretation The approximated kernel features $\hat{\phi}(x)$ can be interpreted as the output of the hidden layer in a shallow neural network. To obtain the neural network interpretation, we rewrite Eq. 2 as the following
$$f(x) = W^T h(W_B^T x) + b, \qquad (7)$$
with $W \in \mathbb{R}^{D \times c}$ with $c$ number of classes, and $b \in \mathbb{R}^D$. Here, the non-linearity $h$ corresponds to the obtained kernel approximation map. Now, we substitute $z = W_B^T x$ in Eqs. 4 and 6, yielding $h(z) = \sqrt{1/D}\,[\sin(z), \cos(z)]^T$ for the Gaussian kernel and $h(z) = \sqrt{1/D}\,\max(0, z)$ for the ArcCos kernel.
3.2 Adapting random kernel approximations
Having introduced the neural network interpretation of random features, the key difference between
both methods is which parameters are trained. For the neural network, one optimizes the parameters
in the bottom-layer and those in the upper layers jointly. For kernel machines, however, WB is fixed,
i.e., the features are not adapted to the data. Hyper-parameters (such as $\sigma$ defining the bandwidth of the Gaussian kernel) are selected with cross-validation or heuristics [12, 6, 8]. Consequently, the basis is not directly adapted to the data, loss, and task at hand.
In our experiments, we consider the classification setting where for the given data $X \in \mathbb{R}^{n \times d}$, containing $n$ samples with $d$ input dimensions, one seeks to predict the target labels $Y \in [0, 1]^{n \times c}$ with a one-hot encoding for $c$ classes. We use accuracy as the performance measure and the multinomial-logistic loss as its surrogate. All our models have the same, generic form shown in Eq. 7. However,
we use different types of basis functions to analyze varying degrees of adaptation. In particular, we
study whether data-dependent basis functions improve over data-agnostic basis functions. On top
of that, we examine how well label-informative, thus task-adapted basis functions can perform in
contrast to the data-agnostic basis. Finally, we use end-to-end learning of all parameters to connect to
neural networks.
Random Basis - RB: For data-agnostic kernel approximation, we use the current state of the art of random features. Orthogonal random features [8, ORF] improve the convergence properties of the Gaussian kernel approximation over random Fourier features [30, 31]. Practically, we substitute $W_B$ with $\frac{1}{\sigma} G_B$, sample $G_B \in \mathbb{R}^{d \times D/2}$ from $N(0, 1)$, and orthogonalize the matrix as given in [8] to approximate the Gaussian kernel. The ArcCos kernel is applied as described above.
We also use these features as initialization of the following adaptive approaches. When adapting the Gaussian kernel we optimize $G_B$ while keeping the scale $\frac{1}{\sigma}$ fixed.
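A sketch of the ORF construction of [8] follows; it stacks QR-orthogonalized Gaussian blocks and rescales to chi-distributed norms so that the directions are orthogonal while each one is marginally distributed like an i.i.d. Gaussian draw.

```python
import numpy as np

def orthogonal_random_matrix(d, D, seed=0):
    """Return a (d, D) matrix whose columns are orthogonal within each
    d-sized block, with chi-distributed column norms as in [8]."""
    rng = np.random.RandomState(seed)
    blocks = []
    for _ in range(int(np.ceil(D / d))):
        Q, _ = np.linalg.qr(rng.randn(d, d))   # orthonormal directions
        S = np.sqrt(rng.chisquare(d, size=d))  # norms of Gaussian vectors
        blocks.append(Q * S[None, :])          # scale each column
    return np.hstack(blocks)[:, :D]

# A sketch of W_B for Eq. (4): the orthogonalized G_B scaled by 1 / sigma.
# W_B = orthogonal_random_matrix(d, D // 2) / sigma
```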
Unsupervised Adapted Basis - UAB: While the introduced random bases converge towards the
true kernel with an increasing number of features, it is to be expected that an optimized approximation
will yield a more compact representation. We address this by optimizing the sampled parameters WB
w.r.t. the kernel approximation error (KAE):
$$\hat{L}(x, x') = \frac{1}{2}\left(k(x, x') - \hat{\phi}(x)^T \hat{\phi}(x')\right)^2 \qquad (8)$$
This objective is kernel- and data-dependent, but agnostic to the classification task.
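A mini-batch version of this objective can be sketched as follows; `phi` stands for the parameterized feature map being adapted, and `kernel` is assumed to return the exact kernel values for corresponding rows of the two batches.

```python
import numpy as np

def kae_minibatch_loss(phi, X1, X2, kernel):
    """Mean of Eq. (8) over a batch of sample pairs (x, x')."""
    k_true = kernel(X1, X2)                    # exact k(x_m, x'_m), shape (m,)
    k_hat = np.sum(phi(X1) * phi(X2), axis=1)  # phi(x)^T phi(x') per pair
    return 0.5 * np.mean((k_true - k_hat) ** 2)
```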
Supervised Adapted Basis - SAB: As an intermediate step between task-agnostic kernel approximations and end-to-end learning, we propose to use kernel target alignment [5] to inject label information. This is achieved by a target kernel function $k_Y$ with $k_Y(x, x') = +1$ if $x$ and $x'$ belong to the same class and $k_Y(x, x') = 0$ otherwise. We maximize the alignment between the approximated kernel $k$ and the target kernel $k_Y$ for a given data set $X$:
$$\hat{A}(X, k, k_Y) = \frac{\langle K, K_Y \rangle}{\sqrt{\langle K, K \rangle \langle K_Y, K_Y \rangle}} \qquad (9)$$
with $\langle K_a, K_b \rangle = \sum_{i,j}^{n} k_a(x_i, x_j)\, k_b(x_i, x_j)$.
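For a mini-batch, the alignment reduces to a few array operations; the sketch below assumes integer class labels `y` from which the target kernel matrix is built. SAB then maximizes this quantity with respect to the basis parameters.

```python
import numpy as np

def kernel_alignment(K, y):
    """Alignment of Eq. (9) for a batch.

    K: approximated kernel matrix, K[i, j] = phi(x_i)^T phi(x_j).
    y: integer class labels; the target kernel is
       K_Y[i, j] = 1 if y_i == y_j else 0.
    """
    KY = (y[:, None] == y[None, :]).astype(K.dtype)
    return np.sum(K * KY) / np.sqrt(np.sum(K * K) * np.sum(KY * KY))
```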
Discriminatively Adapted Basis - DAB: The previous approach uses label information, but is
oblivious to the final classifier. On the other hand, a discriminatively adapted basis is trained jointly
with the classifier to minimize the classification objective, i.e., WB , W , b are optimized at the same
time. This is the end-to-end optimization performed in neural networks.
4 Experiments
Experiments
In the following, we present the empirical results of our study, starting with a description the
experimental setup. Then, we proceed to present the results of using data-dependent and taskdependent basis approximations. In the end, we bridge our analysis to deep learning and fast kernel
machines.
[Figure 1 panels: KAE and accuracy versus the number of features (10 to 30,000) for the Gisette, MNIST, CoverType, and CIFAR10 data sets, each under the Gaussian and ArcCos kernels. Legend: random (RB), unsupervised adapted (UAB), supervised adapted (SAB), discriminative adapted (DAB).]
Figure 1: Adapting bases. The plots show the relationship between the number of features (x-axis), the KAE in logarithmic spacing (left, dashed lines) and the classification error (right, solid lines). Typically, the KAE decreases with a higher number of features, while the accuracy increases. The KAE for SAB and DAB (orange and red dotted lines) hints how much the adaptation deviates from its initialization (blue dashed line). Best viewed in digital and color.
4.1 Experimental setup
We used the following seven data sets for our study: Gisette [13], MNIST [21], CoverType [1], CIFAR10 features from [4], Adult [18], Letter [10], and USPS [15]. The results for the last three can be found in the supplement. We center the data sets and scale them feature-wise into the range $[-1, +1]$.
We use validation sets of size 1, 000 for Gisette, 10, 000 for MNIST, 50, 000 for CoverType, 5, 000
for CIFAR10, 3, 560 for Adult, 4, 500 for Letter, and 1, 290 for USPS. We repeat every test three
times and report the mean over these trials.
Optimization We train all models with mini-batch stochastic gradient descent. The batch size is
64 and as update rule we use ADAM [17]. We use early-stopping where we stop when the respective
loss on the validation set does not decrease for ten epochs. We use Keras [3], Scikit-learn [29],
NumPy [33] and SciPy [16]. We set the hyper-parameter $\sigma$ for the Gaussian kernel heuristically according to [39, 8].
UAB and SAB learning problems scale quadratically in the number of samples $n$. Therefore, to reduce memory requirements we optimize by sampling mini-batches from the kernel matrix. A batch for UAB consists of 64 sample pairs $x$ and $x'$ as input and the respective value of the kernel function $k(x, x')$ as target value. Similarly for SAB, we sample 64 data points as input and generate the
5
target kernel matrix as target value. For each training epoch we randomly generate 10, 000 training
and 1, 000 validation batches, and, eventually, evaluate the performance on 1, 000 unseen, random
batches.
4.2 Analysis
Tab. 1 gives an overview of the best performances achieved by each basis on each data set.
                     Gaussian                   ArcCos
Dataset        RB    UAB    SAB    DAB     RB    UAB    SAB    DAB
Gisette      98.1   97.9   98.1   97.9   97.7   97.8   97.8   97.8
MNIST        98.2   98.2   98.3   98.3   97.2   97.4   97.7   97.9
CoverType    91.9   91.9   90.4   95.2   83.6   83.1   88.7   92.9
CIFAR10      76.4   76.8   79.0   77.3   74.9   76.3   79.4   75.3

Table 1: Best accuracy in % for different bases.
Data-adapted kernel approximations First, we evaluate the effect of choosing a data-dependent
basis (UAB) over a random basis (RB). In Fig. 1, we show the kernel approximation error (KAE)
and the classification accuracy for a range from 10 to 30,000 features (in logarithmic scale). The
first striking observation is that a data-dependent basis can approximate the kernel equally well
with up to two orders of magnitude fewer features compared to the random baseline. This holds for
both the Gaussian and the ArcCos kernel. However, the advantage diminishes as the number of
features increases. When we relate the kernel approximation error to the accuracy, we observe that
initially a decrease in KAE correlates well with an increase in accuracy. However, once the kernel is
approximated sufficiently well, using more features does not impact accuracy any further.
We conclude that the choice between a random or data-dependent basis strongly depends on the
application. When a short training procedure is required, optimizing the basis could be too costly. On
the other hand, if the focus lies on fast inference, we argue for optimizing the basis to obtain a compact
representation. In settings with restricted resources, e.g., mobile devices, this can be a key advantage.
Task-adapted kernels A key difference between kernel methods and neural networks originates
from the training procedure. In kernel methods the feature representation is fixed while the classifier is
optimized. In contrast, deep learning relies on end-to-end training such that the feature representation
is tightly coupled to the classifier. Intuitively, this allows the representation to be tailor-made for the
task at hand. Therefore, one would expect that this allows for an even more compact representation
than the previously examined data-adapted basis.
In Sec. 3, we proposed a task-adapted kernel (SAB). Fig. 1 shows that the approach is comparable in
terms of classification accuracy to the discriminatively trained basis (DAB). Only on the CoverType data
set does SAB perform significantly worse, owing to limited model capacity, which we discuss
below. Both task-adapted features improve significantly in accuracy compared to the random and
data-adaptive kernel approximations.
Transfer learning The beauty of kernel methods is, however, that a kernel function can be used
across a wide range of tasks and consistently result in good performance. Therefore, in the next
experiment, we investigate whether the resulting kernel retains this generalization capability when it
is task-adapted. To investigate the influence of task-dependent information, we randomly separate the
classes of MNIST into two distinct subsets. The first task is to classify five randomly sampled classes
and their respective data points, while the second task is to do the same with the remaining classes.
We train the previously presented model variants on task 1 and transfer their bases to task 2 where we
only learn the classifier. The experiment is repeated with five different splits and the mean accuracy
is reported.
Fig. 2 shows that on the transfer task, the random and the data-adapted bases RB and UAB approximately retain the accuracy achieved on task 1. The performance of the end-to-end trained basis
DAB drops significantly; however, it still yields better performance than the default random basis.
Surprisingly, the supervised basis SAB using kernel-target alignment retains its performance and
achieves the highest accuracy on task 2. This shows that using label information can indeed be
exploited in order to improve the efficiency and performance of kernel approximations without having
to sacrifice generalization. That is, a target-driven kernel (SAB) can be an efficient and still general
alternative to the universal Gaussian kernel.

[Figure 2 plots: classification accuracy versus the number of features (10 to 1,000, logarithmic scale) on task 1 (left) and the transferred task 2 (right), for the bases RB, UAB, SAB, and DAB.]

Figure 2: Transfer learning. We train to discriminate a random subset of 5 classes on the MNIST data set
(left) and then transfer the basis function to a new task (right), i.e., train with the fixed basis from task 1 to
classify between the remaining classes.

[Figure 3 plots: classification accuracy versus the number of features (10 to 10,000, logarithmic scale) for the ArcCos, ArcCos2, and ArcCos3 kernels on MNIST and CoverType, grouped once by kernel and once by basis (RB, UAB, SAB, DAB).]

Figure 3: Deep kernel machines. The plots show the classification performance of the ArcCos kernels with
respect to the kernel (first part) and with respect to the number of layers (second part). Best viewed in digital
and color.
Deep kernel machines We extend our analysis and draw a link to deep learning by adding two deep
kernels [2]. As outlined in that paper, stacking a Gaussian kernel is not useful; instead,
we use ArcCos kernels, which are related to deep learning as described below. Recall the ArcCos kernel
from Eq. 3.1 as $k_1(x, x')$. The kernels ArcCos2 and ArcCos3 are then defined by the inductive step
$k_{i+1}(x, x') = \frac{1}{\pi}\,[k_i(x, x)\,k_i(x', x')]^{1/2}\, J(\theta_i)$ with $\theta_i = \cos^{-1}\!\big(k_i(x, x')\,[k_i(x, x)\,k_i(x', x')]^{-1/2}\big)$.

[Figure 4 plots: KAE and classification accuracy versus the number of features (10 to 10,000, logarithmic scale) on MNIST and CoverType for the bases RB, UAB, SAB, and DAB, with the Gaussian matrix replaced by 1, 2, or 3 structured blocks (HD, HDHD, HDHDHD).]

Figure 4: Fast kernel machines. The plots show how replacing the basis $G_B$ with a fast approximation
influences the performance of a Gaussian kernel. I.e., $G_B$ is replaced by 1, 2, or 3 structured blocks $HD_i$. Fast
approximations with 2 and 3 blocks might overlap with $G_B$. Best viewed in digital and color.
Similarly, the feature map of the ArcCos kernel is approximated by a one-layer neural network with
the ReLU activation function and a random weight matrix $W_B$:
$\hat\varphi_{\mathrm{ArcCos}}(x) = \hat\varphi_B(x) = \sqrt{\tfrac{1}{D}}\,\max(0, W_B^{T} x)$,   (10)
and the feature maps of the ArcCos2 and ArcCos3 kernels are then given by a 2- or 3-layer neural
network with ReLU activations, i.e., $\hat\varphi_{\mathrm{ArcCos2}}(x) = \hat\varphi_{B_1}(\hat\varphi_{B_0}(x))$ and
$\hat\varphi_{\mathrm{ArcCos3}}(x) = \hat\varphi_{B_2}(\hat\varphi_{B_1}(\hat\varphi_{B_0}(x)))$. The training procedure for the ArcCos2 and ArcCos3 kernels remains identical
to the training of the ArcCos kernel, i.e., the random matrices $W_{B_i}$ are adapted simultaneously. The
only difference is that the basis now consists of more than one layer; to remain comparable for a given
number of features, we split these features evenly over two layers for a 2-layer kernel and over three layers for a
3-layer kernel.
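As a minimal sketch of this stacked feature map (our own illustration; standard-normal initialization of each $W_{B_i}$ is assumed, and adapting these matrices by gradient descent would give the UAB/SAB/DAB variants):

import numpy as np

def arccos_features(X, widths, seed=0):
    # Random feature map for the ArcCos family: per layer,
    # phi(x) = sqrt(1/D) * max(0, W^T x) as in Eq. 10, applied
    # repeatedly for the 2- and 3-layer kernels.
    rng = np.random.RandomState(seed)
    H = X
    for D in widths:
        W = rng.randn(H.shape[1], D)   # random basis W_B for this layer
        H = np.sqrt(1.0 / D) * np.maximum(0.0, H @ W)
    return H

# For 1024 total features: ArcCos -> widths=[1024],
# ArcCos2 -> widths=[512, 512], ArcCos3 -> widths=[342, 341, 341].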
In the following we describe our results on the MNIST and CoverType data sets. We observed that the
relationship described so far between RB, UAB, SAB, and DAB also generalizes to deep models
(see Fig. 3, first part, and Fig. 7 in the supplement). That is, UAB approximates the true kernel function
up to several orders of magnitude better than RB and leads to a better resulting classification performance.
Furthermore, SAB and DAB perform similarly well and clearly outperform the task-agnostic bases
RB and UAB.
We now compare the results across the ArcCos-kernels. Consider the third row of Fig. 3, which
depicts the performance of RB and UAB on the CoverType data set. For a limited number of features,
i.e., fewer than 3,000, the deeper kernels perform worse than the shallow ones. Only given enough
capacity are the deep kernels able to perform as well as or better than the single-layer bases. On the
other hand, on the CoverType data set the task-related bases, i.e., SAB and DAB, benefit significantly
from a deeper structure and are thus more efficient. Comparing SAB with DAB, for the ArcCos
kernel with only one layer SAB leads to worse results than DAB. Given two layers the gap diminishes
and vanishes with three layers (see Fig. 3). This suggests that for this data set the evaluated shallow
models are not expressive enough to extract the task-related kernel information.
Fast kernel machines By using structured matrices one can speed up approximated kernel machines [20, 8]. We will now investigate how this important technique influences the presented basis
schemes. The approximation is achieved by replacing random Gaussian matrices with an approximation composed of diagonal and structured Hadamard matrices. The advantage of these matrix
types is that they allow for low storage costs as well as fast multiplication. Recall that the input dimension
is d and the number of features is D. By using the fast Hadamard-transform these algorithms only
need to store O(D) instead of O(dD) parameters and the kernel approximation can be computed in
O(D log d) rather than O(Dd).
We use the approximation from [8] and replace the random Gaussian matrix $W_B = \frac{1}{\sigma} G_B$ in Eq. 4
with a chain of random, structured blocks $W_B \approx \frac{1}{\sigma}\, H D_1 \cdots H D_i$. Each block $H D_i$ consists of
a diagonal matrix Di with entries sampled from the Rademacher distribution and a Hadamard matrix
H. More blocks lead to a better approximation, but consequently require more computation. We
found that the optimization is slightly more unstable and therefore stop early only after 20 epochs
without improvement. When adapting a basis we will only modify the diagonal matrices.
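One way to realize such a chain is sketched below with a textbook fast Walsh-Hadamard transform; zero-padding the input to a power of two and normalizing each Hadamard block are our implementation choices, not details prescribed by [8]:

import numpy as np

def fwht(Z):
    # Fast Walsh-Hadamard transform over the last axis;
    # the length must be a power of two.
    Z = Z.copy()
    h, n = 1, Z.shape[-1]
    while h < n:
        for i in range(0, n, 2 * h):
            a = Z[..., i:i + h].copy()
            b = Z[..., i + h:i + 2 * h].copy()
            Z[..., i:i + h] = a + b
            Z[..., i + h:i + 2 * h] = a - b
        h *= 2
    return Z

def hd_chain(X, num_blocks, sigma, seed=0):
    # Apply (1/sigma) H D_1 ... H D_i to each row of X; only the
    # Rademacher diagonals D_i change when the basis is adapted.
    rng = np.random.RandomState(seed)
    n = 1 << int(np.ceil(np.log2(X.shape[1])))
    Z = np.zeros((X.shape[0], n))
    Z[:, :X.shape[1]] = X            # zero-pad to length 2^m
    for _ in range(num_blocks):
        D = rng.choice([-1.0, 1.0], size=n)  # diagonal of D_i
        Z = fwht(Z * D) / np.sqrt(n)         # orthonormal Hadamard
    return Z / sigma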
We re-conducted our previous experiments for the Gaussian kernel on the MNIST and CoverType
data sets (Fig. 4). One first notices that in most cases the approximation exhibits
no decline in performance and that it is a viable alternative for all basis adaptation schemes. Two
major exceptions are the following. Consider first the left part of the second row, which depicts an
approximated, random kernel machine (RB). The convergence of the kernel approximation stalls
when using a random basis with only one block. As a result the classification performance drops
drastically. This is not the case when the basis is adapted in an unsupervised fashion, shown in the right
part of the second row; here there is no major difference between one block and several. This
means that for fast kernel machines an unsupervised adaptation can lead to a more effective model
utilization, which is crucial for resource aware settings. Furthermore, a discriminatively trained basis,
i.e., a neural network, can be affected similarly by this re-parameterization (see Fig. 4, bottom row).
Here an order of magnitude more features is needed to achieve the same accuracy compared to an
exact representation, regardless of how many blocks are used. In contrast, when adapting the kernel in
a supervised fashion no decline in performance is noticeable. This shows that this procedure uses
parameters very efficiently.
5
Conclusions
Our analysis shows how random and adaptive bases affect the quality of learning. For random features
this comes with the need for a large number of features and suggests that two issues severely limit
approximated kernel machines: the basis being (1) agnostic to the data distribution and (2) agnostic
to the task. We have found that data-dependent optimization of the kernel approximation consistently
results in a more compact representation for a given kernel approximation error. Moreover, task-adapted
features can improve upon this further. Even with fast, structured matrices, the adaptive
features allow a further reduction in the number of required parameters. This presents a promising strategy
when fast and computationally cheap inference is required, e.g., on mobile devices.
Beyond that, we have evaluated the generalization capabilities of the adapted variants on a transfer
learning task. Remarkably, all adapted bases outperform the random baseline here. We have found that
the kernel-task alignment works particularly well in this setting, having almost the same performance
on the transfer task as the target task. At the junction of kernel methods and deep learning, this
shows that incorporating label information can indeed be beneficial for performance without having
to sacrifice generalization capability. Investigating this in more detail appears to be highly promising
and suggests the path for future work.
Acknowledgments
MA, KS, KRM, and FS acknowledge support by the Federal Ministry of Education and Research
(BMBF) under 01IS14013A. PJK has received funding from the European Union's Horizon 2020
research and innovation program under the Marie Sklodowska-Curie grant agreement NO 657679.
KRM further acknowledges partial funding by the Institute for Information & Communications
Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451), BK21
and by DFG. FS is partially supported by NSF IIS-1065243, 1451412, 1513966/1632803, 1208500,
CCF-1139148, a Google Research Award, an Alfred P. Sloan Research Fellowship, and ARO #
W911NF-12-1-0241 and W911NF-15-1-0484. This work was supported by NVIDIA with a hardware
donation.
References
[1] J. A. Blackard and Denis J. D. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 2000.
[2] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342–350, 2009.
[3] François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
[4] Adam Coates, Andrew Y. Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pages 215–223, 2011.
[5] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandola. On kernel-target alignment. Advances in Neural Information Processing Systems, 2001.
[6] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F. Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[7] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.
[8] X. Yu Felix, Ananda Theertha Suresh, Krzysztof M. Choromanski, Daniel N. Holtmann-Rice, and Sanjiv Kumar. Orthogonal random features. In Advances in Neural Information Processing Systems, pages 1975–1983, 2016.
[9] Chang Feng, Qinghua Hu, and Shizhong Liao. Random feature mapping with signed circulant matrix projection. In IJCAI, pages 3490–3496, 2015.
[10] Peter W. Frey and David J. Slate. Letter recognition using Holland-style adaptive classifiers. Machine Learning, 6(2):161–182, 1991.
[11] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315–323, 2011.
[12] Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63–77. Springer, 2005.
[13] Isabelle Guyon, Steve R. Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the NIPS 2003 feature selection challenge. In NIPS, volume 4, pages 545–552, 2004.
[14] Po-Sen Huang, Haim Avron, Tara N. Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on TIMIT. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 205–209. IEEE, 2014.
[15] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.
[16] Eric Jones, Travis Oliphant, and Pearu Peterson. SciPy: open source scientific tools for Python, 2014.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimisation, 2015.
[18] Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.
[19] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] Quoc Le, Tamas Sarlos, and Alexander Smola. Fastfood - computing Hilbert space expansions in loglinear time. Journal of Machine Learning Research, 28:244–252, 2013.
[21] Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998.
[22] Zhiyun Lu, Avner May, Kuan Liu, Alireza Bagheri Garakani, Dong Guo, Aurélien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, et al. How to scale up kernel methods to be as good as deep neural nets. arXiv preprint arXiv:1411.4000, 2014.
[23] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
[24] Grégoire Montavon, Mikio L. Braun, and Klaus-Robert Müller. Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep):2563–2581, 2011.
[25] Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pages 2924–2932, 2014.
[26] John Moody and Christian J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2):281–294, 1989.
[27] Klaus-Robert Müller, A. Smola, Gunnar Rätsch, B. Schölkopf, Jens Kohlmorgen, and Vladimir Vapnik. Using support vector machines for time series prediction. Advances in Kernel Methods - Support Vector Learning, pages 243–254, 1999.
[28] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
[29] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[30] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1177–1184. Curran Associates, Inc., 2008.
[31] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1313–1320. Curran Associates, Inc., 2009.
[32] Dougal J. Sutherland and Jeff Schneider. On the error of random Fourier features. AUAI, 2015.
[33] Stéfan van der Walt, S. Chris Colbert, and Gaël Varoquaux. The NumPy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22–30, 2011.
[34] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 13th International Conference on Neural Information Processing Systems, pages 661–667. MIT Press, 2000.
[35] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning. arXiv preprint arXiv:1511.02222, 2015.
[36] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Advances in Neural Information Processing Systems, pages 476–484, 2012.
[37] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. June 2015.
[38] Zichao Yang, Andrew Wilson, Alex Smola, and Le Song. À la carte - learning fast kernels. Journal of Machine Learning Research, 38:1098–1106, 2015.
[39] Felix X. Yu, Sanjiv Kumar, Henry Rowley, and Shih-Fu Chang. Compact nonlinear maps and circulant extensions. arXiv preprint arXiv:1503.03893, 2015.
6,489 | 687 | Holographic Recurrent Networks
Tony A. Plate
Department of Computer Science
University of Toronto
Toronto, M5S lA4 Canada
Abstract
Holographic Recurrent Networks (HRNs) are recurrent networks
which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using
gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous spaee. The performance
of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.
1
INTRODUCTION
The representation and processing of data with complex structure in neural networks
remains a challenge. In a previous paper [Plate, 1991b] I described Holographic Reduced Representations (HRRs) which use circular-convolution associative-memory
to embody sequential and recursive structure in fixed-width distributed representations. This paper introduces Holographic Recurrent Networks (HRNs), which
are recurrent nets that incorporate these techniques for generating sequences of
symbols or trajectories through continuous space. The recurrent component of
these networks uses convolution operations rather than the logistic-of-matrix-vectorproduct traditionally used in simple recurrent networks (SRNs) [Elman, 1991,
Cleeremans et a/., 1991].
The goals ofthis work are threefold: (1) to investigate the use of circular-convolution
associative memory techniques in networks trained by gradient descent; (2) to see
whether adapting representations can improve the capacity of HRRs; and (3) to
compare performance of HRNs with SRNs.
34
Holographic Recurrent Networks
1.1
RECURRENT NETWORKS & SEQUENTIAL PROCESSING
SRNs have been used successfully to process sequential input and induce finite
state grammars [Elman, 1991, Cleeremans et a/., 1991]. However, training times
were extremely long, even for very simple grammars. This appeared to be due to the
difficulty of findin& a recurrent operation that preserved sufficient context [Maskara
and Noetzel, 1992J. In the work reported in this paper the task is reversed to be
one of generating sequential output. Furthermore, in order to focus on the context
retention aspect, no grammar induction is required.
1.2
CIRCULAR CONVOLUTION
Circular convolution is an associative memory operator. The role of convolution
in holographic memories is analogous to the role of the outer product operation in
matrix style associative memories (e.g., Hopfield nets). Circular convolution can be
viewed as a vector multiplication operator which maps pairs of vectors to a vector
(just as matrix multiplication maps pairs of matrices to a matrix). It is defined as
z = x@y : Zj = I:~:~ YkXj-k, where @ denotes circular eonvolution, x, y, and z
are vectors of dimension n , Xi etc. are their elements, and subscripts are modulo-n
X n -2). Circular convolution can be computed in O(nlogn) using
(so that X-2
Fast Fourier Transforms (FFTs). Algebraically, convolution behaves like scalar
multiplication: it is commutative, associative, and distributes over addition. The
identity vector for convolution (I) is the "impulse" vector: its zero'th element is 1
and all other elements are zero. Most vectors have an inverse under convolution,
i.e., for most vectors x there exists a unique vector y (=x- 1 ) such that x@y = I.
For vectors with identically and independently distributed zero mean elements and
an expected Euclidean length of 1 there is a numerically stable and simply derived
approximate inverse. The approximate inverse of x is denoted by x? and is defined
by the relation
Xn-j.
=
x; =
Vector pairs can be associated by circular convolution. Multiple associations can
be summed. The result can be decoded by convolving with the exact inverse or
approximate inverse, though the latter generally gives more stable results.
Holographie Reduced Representations [Plate, 1991a, Plate, 1991b] use c.ircular convolution for associating elements of a structure in a way that can embody hierarchical structure. The key property of circular convolution that makes it useful for
representing hierarchical structure is that the circular convolution of two vectors is
another vector of the same dimension, which can be used in further associations.
Among assoeiative memories, holographic memories have been regarded as inferior
beeause they produee very noisy results and have poor error correcting properties.
However, when used in Holographic Reduced Representations the noisy results can
be cleaned up with conventional error correcting associative memories. This gives
the best of both worlds - the ability to represent sequential and recursive structure
and clean output vectors.
2
TRAJECTORY-ASSOCIATION
A simple method for storing sequences using circular convolution is to associate
elements of the sequence with points along a predetermined trajectory. This is akin
35
36
Plate
to the memory aid called the method of loci which instructs us to remember a list
of items by associating each term with a distinctive location along a familiar path.
2.1
STORING SEQUENCES BY TRAJECTORY-ASSOCIATION
Elements of the sequence and loci (points) on the trajectory are all represented by
n-dimensional vectors. The loci are derived from a single vector k - they are its
suc,cessive convolutive powers: kO, kl, k 2, etc. The convolutive power is defined in
the obvious way: kO is the identity vector and k i +1 ki@k.
=
The vector k must be c,hosen so that it does not blow up or disappear when raised
to high powers, i.e., so that IlkP II
1 'V p. The dass of vec.tors which satisfy this
constraint is easily identified in the frequency domain (the range of the discrete
Fourier transform). They are the vectors for which the magnitude of the power of
each frequenc.y component is equal to one. This class of vectors is identic,al to the
class for which the approximate inverse is equal to the exact inverse.
=
Thus, the trajectory-association representation for the sequence "abc" is
Sabc.
2.2
= a
+ b@k + c@k2.
DECODING TRAJECTORY-ASSOCIATED SEQUENCES
Trajectory-associated sequences can be decoded by repeatedly convolving with the
inverse of the vector that generated the encoding loci. The results of dec,oding
summed convolution products are very noisy. Consequently, to decode trajec.tory
associated sequences, we must have all the possible sequenc,e elements stored in an
error c,orrecting associative memory. I call this memory the "clean up" memory.
For example, to retrieve the third element of the sequence Sabc we convolve twice
with k- 1 , which expands to a@k- 2 + b@k- 1 + c. The two terms involving powers
of k are unlikely to be correlated with anything in the clean up memory. The most
similar item in clean up memory will probably be c. The clean up memory should
recognize this and output the dean version of c.
2.3
CAPACITY OF TRAJECTORY-ASSOCIATION
In [Plate, 1991a] the capacity of circular-convolution based assoc.iative memory was
c,alculated. It was assumed that the elements of all vectors (dimension n) were
c,hosen randomly from a gaussian distribution with mean zero and variance lin
(giving an expec.ted Eudidean length of 1.0). Quite high dimensional vec.tors were
required to ensure a low probability of error in decoding. For example, with .512
element vec.tors and 1000 items in the clean up memory, 5 pairs can be stored with
a 1% chance of an error in deeoding. The scaling is nearly linear in n: with 1024
element vectors 10 pairs can be stored with about a 1% chance of error. This works
out to a information c,apac.ity of about 0.1 bits per element. The elements are real
numbers, but high precision is not required.
These capacity calculations are roughly applicable to the trajectory-association
method. They slightly underestimate its capacity because the restriction that the
encoding loci have unity power in all frequencies results in lower decoding noise.
Nonetheless this figure provides a useful benchmark against which to compare the
capacity of HRNs which adapt vec.tors using gradient descent.
Holographic Recurrent Networks
3
TRAJECTORY ASSOCIATION & RECURRENT NETS
HRNs incorporate the trajectory-association scheme in recurrent networks. HRNs
are very similar to SRNs , sueh as those used by [Elman , 1991] and [Cleeremans et
al. , 1991]. However, the task used in this paper is different: the generation of target
sequences at the output units, with inputs that do not vary in time.
In order to understand the relationship between HRNs and SRNs both were tested
on the sequence generation task. Several different unit activation functions were
tried for the SRN: symmetric (tanh) and non-symmetric sigmoid (1/(1 + e- X ) ) for
the hidden units, and soft max and normalized RBF for the output units. The best
combination was symmetric sigmoid with softmax outputs .
3.1
ARCHITECTURE
The H RN and the SRN used in the experiments described here are shown in Figure I. In the H RN the key layer y contains the generator for the inverse loci
(corresponding to k- 1 in Section 2). The hidden to output nodes implement the
dean-up memory: the output representation is local and the weights on the links
to an output unit form the vector that represents the symbol corresponding to that
unit . The softmax function serves to give maximum activation to the output unit
whose weights are most similar to the activation at the hidden layer .
The input representation is also loeal, and input activations do not ehange during
the generation of one sequence . Thus the weights from a single input unit determine
the acti vations at the code layer . Nets are reset at the beginning of each seq lIenee.
The HRN computes the following functions . Time superscripts are omitted where
all are the same. See Figure 1 for symbols. The parameter 9 is an adaptable input
gain shared by all output units.
Code units:
Hidden units:
Context units:
Output units:
(first time step)
(subsequent steps)
(h = p@y)
(total input)
(output)
(softmax)
In the SRN the only differenee is in the reeurrence operation, i.e., the computation
of the activations of the hidden units whieh is, where bj is a bias:
hj
= tanh(cj
+ Ek wjkPk + bj).
The objective function of the network is the asymmetric divergence between the
activations of the output units
and the targets
summed over eases sand
timesteps t, plus two weight penalty terms (n is the number of hidden units):
(or)
E = -
(
"""
~ tjst
stJ
(tr)
lor)
("""
2) 2
og t;; + 0.0001
n
~ r + """
~ c) + """
~ (1 - """
L.J
Wjk
J
Jk
0
Wjk
Wjk
Jk
J
k
The first weight penalty term is a standard weight cost designed to penalize large
37
38
Plate
Output
Output
0
HRN
0
SRN
(;ontext p
Input i
Figure 1: Holographic. Recurrent Network (HRN) and Simple Recurrent Network
(SRN). The backwards curved arrows denote a copy of activations to the next time
step . In the HRN the c.ode layer is active only at the first time step and the c.ontext
layer is active only after the first time step. The hidden, code, context, and key
layers all have the same number of units. Some input units are used only during
training, others only during testing.
weights. The sec.ond weight penalty term was designed to force the Eudidean length
of the weight vector on each output unit to be one. This penalty term helped the
HRN c.onsiderably but did not noticeably improve the performance of the SRN.
The partial derivatives for the activations were c.omputed by the unfolding in time
method [Rumelhart et ai., 1986]. The partial derivatives for the activations of the
context units in the HRN are:
DE
DE
= 81 . Yk-j
(= 'lpE = 'lh@Y*)
a-:
L
PJ
k
~J
When there are a large number of hidden units it is more efficient to compute this
derivative via FFTs as the convolution expression on the right .
On all sequenc.es the net was cycled for as many time steps as required to produc.e
the target sequence. The outputs did not indic.ate when the net had reached the end
of the sequence, however, other experiments have shown that it is a simple matter
to add an output to indic.ate this.
3.2
TRAINING AND GENERATIVE CAPACITY RESULTS
One of the motivations for this work was to find recurrent networks with high
generative capacity, i.e., networks whic.h after training on just a few sequences
c.ould generate many other sequences without further modification of recurrent or
output weights . The only thing in the network that changes to produce a different
sequence is the activation on the codes units. To have high generative capacity the
function of the output weights and recurrent weights (if they exist) must generalize
to the production of novel sequenc.es. At each step the recurrent operation must
update and retain information about the current position in the sequence. It was
Holographic Recurrent Networks
expected that this would be a difficult task for SRNs, given the reported difficulties
with getting SRNs to retain context, and Simard and LeCun's [1992] report of being
unable to train a type of recurrent network to generate more than one trajectory
through c.ontinuous space. However, it turned out that HRNs, and to a lesser extent
SRNs, c.ould be easily trained to perform the sequence generation task well.
The generative capacity of HRNs and SRNs was tested using randomly chosen
sequences over :3 symbols (a, b, and c). The training data was (in all but one
case) 12 sequences of length 4, e.g., "abac", and "bacb". Networks were trained
on this data using the conjugate gradient method until all sequences were correctly
generated. A symbol was judged to be correct when the activation of the correct
output unit exceeded 0.5 and exceeded twice any other output unit activation.
After the network had been trained, all the weights and parameters were frozen,
except for the weights on the input to c.ode links. Then the network was trained on
a test set of novel sequences of lengths 3 to 16 (32 sequences of each length). This
training could be done one sequence at a time since the generation of each sequence
involved an exclusive set of modifiable weights, as only one input unit was active for
any sequence. The search for code weights for the test sequences was a c.onjugate
gradient search limited to 100 iterations.
100%
'x
80%
HRN 64 --0HRN 32 -+HRN 16 ~
HRN 8 . X? N 4 'L:!.' ?
60%
40%
20%
A
)(
A
x.
0%
4
6
8 10 12 14 16
4
6
8 10 12 14 16
Figure 2: Percentage of novel sequences that can be generated versus length.
The graph on the left in Figure 2 shows how the performance varies with sequence
length for various networks with 16 hidden units. The points on this graph are the
average of 5 runs; each run began with a randomization of all weights. The worst
performance was produced by the SRN. The HRN gave the best performance: it
was able to produce around 90% of all sequences up to length 12. Interestingly,
a SRN (SRNZ in Figure 2) with frozen random recurrent weights from a suitable
distribution performed significantly better than the unconstrained SRN.
To some extent, the poor performance of the SRN was due to overtraining. This
was verified by training a SRN on 48 sequences oflength 8 (8 times as much data).
The performance improved greatly (SRN+ in Figure 2), but was still not as good
that of the HRN trained on the lesser amount of data. This suggests that the extra
parameters provided by the recurrent links in the SRN serve little useful purpose:
the net does well with fixed random values for those parameters and a HRN does
better without modifying any parameters in this operation. It appears that all that
39
40
Plate
is required in the recurrent operation is some stable random map.
The scaling performance of the HRN with respect to the number of hidden units
is good. ThE" graph on the right in Figure 2 shows the performance of HRNs with
R output units and varying numbers of hidden units (averages of 5 runs). As the
number of hidden units increases from 4 to 64 the generative capaeity increases
steadily. The sealing of sequence length with number of outputs (not shown) is also
good : it is over 1 bit per hidden unit. This compares very will with the 0.1 bit per
element aehieved by random vector eircular-c.onvolution (Section 2.3).
The training times for both the HRNs and the SRNs were very short. Both required around 30 passes through the training data to train the output and recurrent
weights. Finding a c.ode for test sequence of length 8 took the HRN an average of
14 passes. The SRN took an average of .57 passes (44 with frozen weights). The
SRN trained on more data took mueh longer for the initial training (average 281
passes) but the c.ode searc.h was shorter (average 31 passes).
4
TRAJECTORIES IN CONTINUOUS SPACE
HRNs ean also be used to generate trajectories through c.ontinuous spaee. Only two
modifieations need be made: (a) ehange the function on the output units to sigmoid
and add biases, and (b) use a fractional power for the key vector. A fractional power
vector f can be generated by taking a random unity-power vector k and multiplying
the phase angle of each frequency component by some fraction (\', i.e., f = kC/. The
result is that fi is similar to fi when the difference between i and j is less than 1/ (\' ,
and the similarity is greater for closer i and j. The output at the hidden layer will
be similar at successive time steps. If desired, the speed at which the trajectory is
traversed can be altered by changing (\'.
target X
target Y net Y
Figure 3: Targets and outputs of a HRN trained to generate trajectories through
c.ontinuous space. X and Yare plotted against time.
A trajectory generating HRN with 16 hidden units and a key veetor k O. 06 was trained
to produce pen trajectories (100 steps) for 20 instances of handwritten digits (two
of each). This is the same task that Simard and Le Cun [1992] used. The target
trajectories and the output of the network for one instance are shown in Figure 3.
5
DISCUSSION
One issue in processing sequential data with neural networks is how to present
the inputs to the network. One approach has been to use a fixed window on the
sequence, e.g., as in NETtaik [Sejnowski and Rosenberg, 1986] . A disadvantage
of this is any fixed size of window may not be large enough in some situations.
Another approach is to use a recurrent net to retain information about previous
Holographic Recurrent Networks
inputs. A disadvantage of this is the difficulty that recurrent nets have in retaining
information over many time steps. Generative networks offer another approach: use
the codes that generate a sequence as input rather than the raw sequence. This
would allow a fixed size network to take sequences of variable length as inputs (as
long as they were finite), without having to use multiple input blocks or windows.
The main attraction of circular convolution as an associative memory operator is
its affordance of the representation of hierarchical structure. A hierarchical HRN,
which takes advantage of this to represent sequences in chunks, has been built.
However, it remains to be seen if it can be trained by gradient descent.
6
CONCLUSION
The c.ircular convolution operation can be effectively incorporated into recurrent
nets and the resulting nets (HRNs) can be easily trained using gradient descent to
generate sequences and trajectories. HRNs appear to be more suited to this task
than SRNs, though SRNs did surprisingly well. The relatively high generative capacity of HRNs shows that the capacity of circular convolution associative memory
tplate, 1991a] can be greatly improved by adapting representations of vectors.
References
[Cleeremans et al., 1991] A. Cleeremans, D. Servan-Schreiber, and J. 1. McClelland. Graded state machines: The representation of temporal contingencies in
simple recurrent networks. Machine Learning, 7(2/3):161-194, 1991.
[Elman, 1991] J. Elman. Distributed representations, simple recurrent networks
and grammatical structure. Machine Learning, 7(2/3):195-226, 1991.
[Maskara and Noetzel, 1992] Arun Maskara and Andrew Noetzel. Forcing simple
recurrent neural networks to encode context. In Proceedings of the 1992 Long
Island Conference on Artificial Intelligence and Computer Graphics, 1992.
[Plate, 1991a] T. A. Plate. Holographic Reduced Representations. Technical Report
CRG-TR-91-1, Department of Computer Science, University of Toronto, 1991.
[Plate, 1991 b] T. A. Plate. Holographic Reduced Representations: Convolution
algebra for compositional distributed representations. In Proceedings of the 12th
International Joint Conference on Artificial Intelligence, pages 30-35, Sydney,
Australia, 1991.
[Rumelhart et al., 1986] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel distributed processing: Explorations in the microstructure of cognition, volume 1, chapter 8, pages
318-362. Bradford Books, Cambridge, MA, 1986.
[Sejnowski and Rosenberg, 1986] T. J. Sejnowski and C. R. Rosenberg. NETtalk:
A parallel network that learns to read aloud. Technical report 86-01, Department of Electrical Engineering and Computer Science, Johns Hopkins University,
Baltimore, MD., 1986.
[Simard and LeCun, 1992] P. Simard and Y. LeCun. Reverse TDNN: an architecture for trajectory generation. In J. M. Moody, S. J. Hanson, and R. P. Lippman,
editors, Advances in Neural Information Processing Systems 4 (NIPS*91), Denver, CO, 1992. Morgan Kaufmann.
6,490 | 6,870 | Bridging the Gap Between Value and Policy Based
Reinforcement Learning
Ofir Nachum1
Mohammad Norouzi
Kelvin Xu1
Dale Schuurmans
{ofirnachum,mnorouzi,kelvinxx}@google.com, [email protected]
Google Brain
Abstract
We establish a new connection between value and policy based reinforcement
learning (RL) based on a relationship between softmax temporal value consistency
and policy optimality under entropy regularization. Specifically, we show that
softmax consistent action values correspond to optimal entropy regularized policy
probabilities along any action sequence, regardless of provenance. From this
observation, we develop a new RL algorithm, Path Consistency Learning (PCL),
that minimizes a notion of soft consistency error along multi-step action sequences
extracted from both on- and off-policy traces. We examine the behavior of PCL
in different scenarios and show that PCL can be interpreted as generalizing both
actor-critic and Q-learning algorithms. We subsequently deepen the relationship
by showing how a single model can be used to represent both a policy and the
corresponding softmax state values, eliminating the need for a separate critic. The
experimental evaluation demonstrates that PCL significantly outperforms strong
actor-critic and Q-learning baselines across several benchmarks.2
1 Introduction
Model-free RL aims to acquire an effective behavior policy through trial and error interaction with a
black box environment. The goal is to optimize the quality of an agent's behavior policy in terms of
the total expected discounted reward. Model-free RL has a myriad of applications in games [22, 37],
robotics [16, 17], and marketing [18, 38], to name a few. Recently, the impact of model-free RL has
been expanded through the use of deep neural networks, which promise to replace manual feature
engineering with end-to-end learning of value and policy representations. Unfortunately, a key
challenge remains how best to combine the advantages of value and policy based RL approaches in
the presence of deep function approximators, while mitigating their shortcomings. Although recent
progress has been made in combining value and policy based methods, this issue is not yet settled,
and the intricacies of each perspective are exacerbated by deep models.
The primary advantage of policy based approaches, such as REINFORCE [45], is that they directly
optimize the quantity of interest while remaining stable under function approximation (given a
sufficiently small learning rate). Their biggest drawback is sample inefficiency: since policy gradients
are estimated from rollouts the variance is often extreme. Although policy updates can be improved
by the use of appropriate geometry [14, 27, 32], the need for variance reduction remains paramount.
Actor-critic methods have thus become popular [33, 34, 36], because they use value approximators
to replace rollout estimates and reduce variance, at the cost of some bias. Nevertheless, on-policy
learning remains inherently sample inefficient [10]; by estimating quantities defined by the current
policy, either on-policy data must be used, or updating must be sufficiently slow to avoid significant
bias. Naive importance correction is hardly able to overcome these shortcomings in practice [28, 29].
1 Work done as a member of the Google Brain Residency program (g.co/brainresidency)
2 An implementation of PCL can be found at https://github.com/tensorflow/models/tree/master/research/pcl_rl
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
By contrast, value based methods, such as Q-learning [44, 22, 30, 42, 21], can learn from any
trajectory sampled from the same environment. Such ?off-policy? methods are able to exploit data
from other sources, such as experts, making them inherently more sample efficient than on-policy
methods [10]. Their key drawback is that off-policy learning does not stably interact with function
approximation [35, Chap.11]. The practical consequence is that extensive hyperparameter tuning can
be required to obtain stable behavior. Despite practical success [22], there is also little theoretical
understanding of how deep Q-learning might obtain near-optimal objective values.
Ideally, one would like to combine the unbiasedness and stability of on-policy training with the data
efficiency of off-policy approaches. This desire has motivated substantial recent work on off-policy
actor-critic methods, where the data efficiency of policy gradient is improved by training an off-policy critic [19, 21, 10]. Although such methods have demonstrated improvements over on-policy
actor-critic approaches, they have not resolved the theoretical difficulty associated with off-policy
learning under function approximation. Hence, current methods remain potentially unstable and
require specialized algorithmic and theoretical development as well as delicate tuning to be effective
in practice [10, 41, 8].
In this paper, we exploit a relationship between policy optimization under entropy regularization
and softmax value consistency to obtain a new form of stable off-policy learning. Even though
entropy regularized policy optimization is a well studied topic in RL [46, 39, 40, 47, 5, 4, 6, 7] (in fact, one that has been attracting renewed interest from concurrent work [25, 11]), we contribute new
observations to this study that are essential for the methods we propose: first, we identify a strong
form of path consistency that relates optimal policy probabilities under entropy regularization to
softmax consistent state values for any action sequence; second, we use this result to formulate a
novel optimization objective that allows for a stable form of off-policy actor-critic learning; finally, we
observe that under this objective the actor and critic can be unified in a single model that coherently
fulfills both roles.
2 Notation & Background
We model an agent's behavior by a parametric distribution $\pi_\theta(a \mid s)$ defined by a neural network over a finite set of actions. At iteration $t$, the agent encounters a state $s_t$ and performs an action $a_t$ sampled from $\pi_\theta(a \mid s_t)$. The environment then returns a scalar reward $r_t$ and transitions to the next state $s_{t+1}$.
Note: Our main results identify specific properties that hold for arbitrary action sequences. To keep
the presentation clear and focus attention on the key properties, we provide a simplified presentation
in the main body of this paper by assuming deterministic state dynamics. This restriction is not
necessary, and in the Supplementary Material we provide a full treatment of the same concepts
generalized to stochastic state dynamics. All of the desired properties continue to hold in the general
case and the algorithms proposed remain unaffected.
For simplicity, we assume the per-step reward $r_t$ and the next state $s_{t+1}$ are given by functions $r_t = r(s_t, a_t)$ and $s_{t+1} = f(s_t, a_t)$ specified by the environment. We begin the formulation by reviewing the key elements of Q-learning [43, 44], which uses a notion of hard-max Bellman backup to enable off-policy TD control. First, observe that the expected discounted reward objective, $O_{\mathrm{ER}}(s, \pi)$, can be recursively expressed as,
$$O_{\mathrm{ER}}(s, \pi) = \sum_a \pi(a \mid s)\,\big[r(s, a) + \gamma\, O_{\mathrm{ER}}(s', \pi)\big]\,, \quad \text{where } s' = f(s, a). \qquad (1)$$
Let $V^\circ(s)$ denote the optimal state value at a state $s$ given by the maximum value of $O_{\mathrm{ER}}(s, \pi)$ over policies, i.e., $V^\circ(s) = \max_\pi O_{\mathrm{ER}}(s, \pi)$. Accordingly, let $\pi^\circ$ denote the optimal policy that results in $V^\circ(s)$ (for simplicity, assume there is one unique optimal policy), i.e., $\pi^\circ = \operatorname{argmax}_\pi O_{\mathrm{ER}}(s, \pi)$. Such an optimal policy is a one-hot distribution that assigns a probability of 1 to an action with maximal return and 0 elsewhere. Thus we have
$$V^\circ(s) = O_{\mathrm{ER}}(s, \pi^\circ) = \max_a\,\big(r(s, a) + \gamma V^\circ(s')\big). \qquad (2)$$
This is the well-known hard-max Bellman temporal consistency. Instead of state values, one can equivalently (and more commonly) express this consistency in terms of optimal action values, $Q^\circ$:
$$Q^\circ(s, a) = r(s, a) + \gamma \max_{a'} Q^\circ(s', a')\,. \qquad (3)$$
Q-learning relies on a value iteration algorithm based on (3), where $Q(s, a)$ is bootstrapped based on successor action values $Q(s', a')$.
3 Softmax Temporal Consistency
In this paper, we study the optimal state and action values for a softmax form of temporal consistency [48, 47, 7], which arises by augmenting the standard expected reward objective with a discounted entropy regularizer. Entropy regularization [46] encourages exploration and helps prevent early convergence to sub-optimal policies, as has been confirmed in practice (e.g., [21, 24]). In this case, one can express regularized expected reward as a sum of the expected reward and a discounted entropy term,
$$O_{\mathrm{ENT}}(s, \pi) = O_{\mathrm{ER}}(s, \pi) + \tau\, H(s, \pi)\,, \qquad (4)$$
where $\tau \ge 0$ is a user-specified temperature parameter that controls the degree of entropy regularization, and the discounted entropy $H(s, \pi)$ is recursively defined as
$$H(s, \pi) = \sum_a \pi(a \mid s)\,\big[-\log \pi(a \mid s) + \gamma\, H(s', \pi)\big]\,. \qquad (5)$$
The objective $O_{\mathrm{ENT}}(s, \pi)$ can then be re-expressed recursively as,
$$O_{\mathrm{ENT}}(s, \pi) = \sum_a \pi(a \mid s)\,\big[r(s, a) - \tau \log \pi(a \mid s) + \gamma\, O_{\mathrm{ENT}}(s', \pi)\big]\,. \qquad (6)$$
Note that when $\gamma = 1$ this is equivalent to the entropy regularized objective proposed in [46].
Let $V^*(s) = \max_\pi O_{\mathrm{ENT}}(s, \pi)$ denote the soft optimal state value at a state $s$ and let $\pi^*(a \mid s)$ denote the optimal policy at $s$ that attains the maximum of $O_{\mathrm{ENT}}(s, \pi)$. When $\tau > 0$, the optimal policy is no longer a one-hot distribution, since the entropy term prefers the use of policies with more uncertainty. We characterize the optimal policy $\pi^*(a \mid s)$ in terms of the $O_{\mathrm{ENT}}$-optimal state values of successor states $V^*(s')$ as a Boltzmann distribution of the form,
$$\pi^*(a \mid s) \propto \exp\{(r(s, a) + \gamma V^*(s'))/\tau\}\,. \qquad (7)$$
It can be verified that this is the solution by noting that the $O_{\mathrm{ENT}}(s, \pi)$ objective is simply a $\tau$-scaled constant-shifted KL-divergence between $\pi$ and $\pi^*$, hence the optimum is achieved when $\pi = \pi^*$.
To derive $V^*(s)$ in terms of $V^*(s')$, the policy $\pi^*(a \mid s)$ can be substituted into (6), which after some manipulation yields the intuitive definition of optimal state value in terms of a softmax (i.e., log-sum-exp) backup,
$$V^*(s) = O_{\mathrm{ENT}}(s, \pi^*) = \tau \log \sum_a \exp\{(r(s, a) + \gamma V^*(s'))/\tau\}\,. \qquad (8)$$
Note that in the $\tau \to 0$ limit one recovers the hard-max state values defined in (2). Therefore we can equivalently state softmax temporal consistency in terms of optimal action values $Q^*(s, a)$ as,
$$Q^*(s, a) = r(s, a) + \gamma V^*(s') = r(s, a) + \gamma \tau \log \sum_{a'} \exp\big(Q^*(s', a')/\tau\big)\,. \qquad (9)$$
Now, much like Q-learning, the consistency equation (9) can be used to perform one-step backups to asynchronously bootstrap $Q^*(s, a)$ based on $Q^*(s', a')$. In the Supplementary Material we prove that such a procedure, in the tabular case, converges to a unique fixed point representing the optimal values.
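To make the backup concrete, below is a minimal tabular sketch of this fixed-point iteration for a deterministic MDP; the toy MDP, variable names, and constants are our own illustration, not from the paper:

```python
import numpy as np

def soft_q_iteration(R, T, gamma=0.9, tau=0.1, iters=500):
    """Iterate the softmax backup (9): Q(s,a) <- r(s,a) + gamma * V(f(s,a)),
    where V(s) = tau * log sum_a' exp(Q(s,a') / tau)."""
    Q = np.zeros_like(R)
    for _ in range(iters):
        V = tau * np.log(np.exp(Q / tau).sum(axis=1))  # soft state values
        Q = R + gamma * V[T]                           # one backup per (s, a)
    return Q

# Toy 2-state, 2-action deterministic MDP: R[s, a] = r(s, a), T[s, a] = f(s, a).
R = np.array([[1.0, 0.0], [0.0, 0.5]])
T = np.array([[0, 1], [1, 0]])
Q_star = soft_q_iteration(R, T)
```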
We point out that the notion of softmax Q-values has been studied in previous work (e.g., [47, 48, 13,
5, 3, 7]). Concurrently to our work, [11] has also proposed a soft Q-learning algorithm for continuous
control that is based on a similar notion of softmax temporal consistency. However, we contribute
new observations below that lead to the novel training principles we explore.
4 Consistency Between Optimal Value & Policy
We now describe the main technical contributions of this paper, which lead to the development of
two novel off-policy RL algorithms in Section 5. The first key observation is that, for the softmax value function $V^*$ in (8), the quantity $\exp\{V^*(s)/\tau\}$ also serves as the normalization factor of the optimal policy $\pi^*(a \mid s)$ in (7); that is,
$$\pi^*(a \mid s) = \frac{\exp\{(r(s, a) + \gamma V^*(s'))/\tau\}}{\exp\{V^*(s)/\tau\}}\,. \qquad (10)$$
Manipulation of (10) by taking the log of both sides then reveals an important connection between the optimal state value $V^*(s)$, the value $V^*(s')$ of the successor state $s'$ reached from any action $a$ taken in $s$, and the corresponding action probability under the optimal log-policy, $\log \pi^*(a \mid s)$.
Theorem 1. For $\tau > 0$, the policy $\pi^*$ that maximizes $O_{\mathrm{ENT}}$ and state values $V^*(s) = \max_\pi O_{\mathrm{ENT}}(s, \pi)$ satisfy the following temporal consistency property for any state $s$ and action $a$ (where $s' = f(s, a)$),
$$V^*(s) - \gamma V^*(s') = r(s, a) - \tau \log \pi^*(a \mid s)\,. \qquad (11)$$
Proof. All theorems are established for the general case of a stochastic environment and discounted infinite horizon problems in the Supplementary Material. Theorem 1 follows as a special case.
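Because (7), (8), and (11) are tightly coupled, the identity can be checked numerically in a few lines; the toy numbers below are our own:

```python
import numpy as np

tau, gamma = 0.5, 0.9
r = np.array([1.0, -0.5, 0.2])       # r(s, a) for three actions at a state s
v_next = np.array([0.3, 0.0, 1.1])   # V*(f(s, a)) for the three successors

v = tau * np.log(np.exp((r + gamma * v_next) / tau).sum())  # V*(s) via (8)
log_pi = (r + gamma * v_next - v) / tau                     # log pi* via (7)

# (11) holds for every action, not just the greedy one.
assert np.allclose(v - gamma * v_next, r - tau * log_pi)
```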
Note that one can also characterize $\pi^*$ in terms of $Q^*$ as
$$\pi^*(a \mid s) = \exp\{(Q^*(s, a) - V^*(s))/\tau\}\,. \qquad (12)$$
An important property of the one-step softmax consistency established in (11) is that it can be extended to a multi-step consistency defined on any action sequence from any given state. That is, the softmax optimal state values at the beginning and end of any action sequence can be related to the rewards and optimal log-probabilities observed along the trajectory.
Corollary 2. For $\tau > 0$, the optimal policy $\pi^*$ and optimal state values $V^*$ satisfy the following extended temporal consistency property, for any state $s_1$ and any action sequence $a_1, \ldots, a_{t-1}$ (where $s_{i+1} = f(s_i, a_i)$):
$$V^*(s_1) - \gamma^{t-1} V^*(s_t) = \sum_{i=1}^{t-1} \gamma^{i-1}\,\big[r(s_i, a_i) - \tau \log \pi^*(a_i \mid s_i)\big]\,. \qquad (13)$$
Proof. The proof in the Supplementary Material applies (the generalized version of) Theorem 1 to any $s_1$ and sequence $a_1, \ldots, a_{t-1}$, summing the left and right hand sides of (the generalized version of) (11) to induce telescopic cancellation of intermediate state values. Corollary 2 follows as a special case.
Importantly, the converse of Theorem 1 (and Corollary 2) also holds:
Theorem 3. If a policy $\pi(a \mid s)$ and state value function $V(s)$ satisfy the consistency property (11) for all states $s$ and actions $a$ (where $s' = f(s, a)$), then $\pi = \pi^*$ and $V = V^*$. (See the Supplementary Material.)
Theorem 3 motivates the use of one-step and multi-step path-wise consistencies as the foundation
of RL algorithms that aim to learn parameterized policy and value estimates by minimizing the
discrepancy between the left and right hand sides of (11) and (13).
5 Path Consistency Learning (PCL)
The temporal consistency properties between the optimal policy and optimal state values developed above lead to a natural path-wise objective for training a policy $\pi_\theta$, parameterized by $\theta$, and a state value function $V_\phi$, parameterized by $\phi$, via the minimization of a soft consistency error. Based on (13), we first define a notion of soft consistency for a $d$-length sub-trajectory $s_{i:i+d} \equiv (s_i, a_i, \ldots, s_{i+d-1}, a_{i+d-1}, s_{i+d})$ as a function of $\theta$ and $\phi$:
$$C(s_{i:i+d}, \theta, \phi) = -V_\phi(s_i) + \gamma^d V_\phi(s_{i+d}) + \sum_{j=0}^{d-1} \gamma^j\,\big[r(s_{i+j}, a_{i+j}) - \tau \log \pi_\theta(a_{i+j} \mid s_{i+j})\big]\,. \qquad (14)$$
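As a sketch, the consistency of a single sub-trajectory could be computed as follows; the array names are ours, and in practice `values` and `log_pi` would come from the value and policy networks:

```python
import numpy as np

def soft_consistency(values, rewards, log_pi, gamma, tau):
    """C(s_{i:i+d}, theta, phi) from (14) for one length-d sub-trajectory.
    values has length d + 1; rewards and log_pi have length d."""
    rewards, log_pi = np.asarray(rewards), np.asarray(log_pi)
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    return (-values[0] + gamma ** d * values[-1]
            + np.sum(discounts * (rewards - tau * log_pi)))
```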
The goal of a learning algorithm can then be to find $V_\phi$ and $\pi_\theta$ such that $C(s_{i:i+d}, \theta, \phi)$ is as close to 0 as possible for all sub-trajectories $s_{i:i+d}$. Accordingly, we propose a new learning algorithm, called Path Consistency Learning (PCL), that attempts to minimize the squared soft consistency error over a set of sub-trajectories $E$,
$$O_{\mathrm{PCL}}(\theta, \phi) = \sum_{s_{i:i+d} \in E} \tfrac{1}{2}\, C(s_{i:i+d}, \theta, \phi)^2\,. \qquad (15)$$
The PCL update rules for $\theta$ and $\phi$ are derived by calculating the gradient of (15). For a given trajectory $s_{i:i+d}$ these take the form,
$$\Delta\theta = \eta_\pi\, C(s_{i:i+d}, \theta, \phi) \sum_{j=0}^{d-1} \gamma^j \nabla_\theta \log \pi_\theta(a_{i+j} \mid s_{i+j})\,, \qquad (16)$$
$$\Delta\phi = \eta_v\, C(s_{i:i+d}, \theta, \phi)\,\big(\nabla_\phi V_\phi(s_i) - \gamma^d \nabla_\phi V_\phi(s_{i+d})\big)\,, \qquad (17)$$
where $\eta_v$ and $\eta_\pi$ denote the value and policy learning rates respectively. Given that the consistency property must hold on any path, the PCL algorithm applies the updates (16) and (17) both to trajectories sampled on-policy from $\pi_\theta$ as well as trajectories sampled from a replay buffer. The union of these trajectories comprises the set $E$ used in (15) to define $O_{\mathrm{PCL}}$.
Specifically, given a fixed rollout parameter $d$, at each iteration, PCL samples a batch of on-policy trajectories and computes the corresponding parameter updates for each sub-trajectory of length $d$. Then PCL exploits off-policy trajectories by maintaining a replay buffer and applying additional updates based on a batch of episodes sampled from the buffer at each iteration. We have found it beneficial to sample replay episodes proportionally to exponentiated reward, mixed with a uniform distribution, although we did not exhaustively experiment with this sampling procedure. In particular, we sample a full episode $s_{0:T}$ from the replay buffer of size $B$ with probability $0.1/B + 0.9 \cdot \exp(\alpha \sum_{i=0}^{T-1} r(s_i, a_i))/Z$, where we use no discounting on the sum of rewards, $Z$ is a normalization factor, and $\alpha$ is a hyper-parameter. Pseudocode of PCL is provided in the Appendix.
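A sketch of that sampling distribution, assuming the buffer stores each episode's undiscounted reward sum (function and variable names are ours):

```python
import numpy as np

def replay_probabilities(episode_returns, alpha):
    """P(episode k) = 0.1 / B + 0.9 * exp(alpha * R_k) / Z over B episodes."""
    B = len(episode_returns)
    logits = alpha * np.asarray(episode_returns)
    logits -= logits.max()                     # stabilize the exponentials
    boltzmann = np.exp(logits) / np.exp(logits).sum()
    return 0.1 / B + 0.9 * boltzmann           # mixes to a valid distribution
```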
We note that in stochastic settings, our squared inconsistency objective approximated by Monte Carlo
samples is a biased estimate of the true squared inconsistency (in which an expectation over stochastic
dynamics occurs inside rather than outside the square). This issue arises in Q-learning as well, and
others have proposed possible remedies which can also be applied to PCL [2].
5.1 Unified Path Consistency Learning (Unified PCL)
The PCL algorithm maintains a separate model for the policy and the state value approximation. However, given the soft consistency between the state and action value functions (e.g., in (9)), one can express the soft consistency errors strictly in terms of Q-values. Let $Q_\rho$ denote a model of action values parameterized by $\rho$, based on which one can estimate both the state values and the policy as,
$$V_\rho(s) = \tau \log \sum_a \exp\{Q_\rho(s, a)/\tau\}\,, \qquad (18)$$
$$\pi_\rho(a \mid s) = \exp\{(Q_\rho(s, a) - V_\rho(s))/\tau\}\,. \qquad (19)$$
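In code, recovering both quantities from a single Q-model is a two-liner; this sketch (with our names) operates on the Q-values of one state:

```python
import numpy as np

def value_and_policy_from_q(q, tau):
    """V(s) via the log-sum-exp (18) and pi(.|s) via (19), from Q(s, .)."""
    q = np.asarray(q)
    v = tau * np.log(np.exp(q / tau).sum())
    pi = np.exp((q - v) / tau)     # normalized by construction
    return v, pi
```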
Given this unified parameterization of policy and value, we can formulate an alternative algorithm, called Unified Path Consistency Learning (Unified PCL), which optimizes the same objective (i.e., (15)) as PCL but differs by combining the policy and value function into a single model. Merging the policy and value function models in this way is significant because it presents a new actor-critic paradigm where the policy (actor) is not distinct from the values (critic). We note that in practice, we have found it beneficial to apply updates to $\rho$ from $V_\rho$ and $\pi_\rho$ using different learning rates, very much like PCL. Accordingly, the update rule for $\rho$ takes the form,
$$\Delta\rho = \eta_\pi\, C(s_{i:i+d}, \rho) \sum_{j=0}^{d-1} \gamma^j \nabla_\rho \log \pi_\rho(a_{i+j} \mid s_{i+j}) \qquad (20)$$
$$\quad\; +\, \eta_v\, C(s_{i:i+d}, \rho)\,\big(\nabla_\rho V_\rho(s_i) - \gamma^d \nabla_\rho V_\rho(s_{i+d})\big)\,. \qquad (21)$$
5.2 Connections to Actor-Critic and Q-learning
To those familiar with advantage-actor-critic methods [21] (A2C and its asynchronous analogue A3C), PCL's update rules might appear to be similar. In particular, advantage-actor-critic is an on-policy method that exploits the expected value function,
$$V^\pi(s) = \sum_a \pi(a \mid s)\,\big[r(s, a) + \gamma V^\pi(s')\big]\,, \qquad (22)$$
to reduce the variance of policy gradient, in service of maximizing the expected reward. As in PCL, two models are trained concurrently: an actor $\pi_\theta$ that determines the policy, and a critic $V_\phi$ that is trained to estimate $V^{\pi_\theta}$. A fixed rollout parameter $d$ is chosen, and the advantage of an on-policy trajectory $s_{i:i+d}$ is estimated by
$$A(s_{i:i+d}, \phi) = -V_\phi(s_i) + \gamma^d V_\phi(s_{i+d}) + \sum_{j=0}^{d-1} \gamma^j\, r(s_{i+j}, a_{i+j})\,. \qquad (23)$$
The advantage-actor-critic updates for $\theta$ and $\phi$ can then be written as,
$$\Delta\theta = \eta_\pi\, \mathbb{E}_{s_{i:i+d} \mid \theta}\big[A(s_{i:i+d}, \phi)\, \nabla_\theta \log \pi_\theta(a_i \mid s_i)\big]\,, \qquad (24)$$
$$\Delta\phi = \eta_v\, \mathbb{E}_{s_{i:i+d} \mid \theta}\big[A(s_{i:i+d}, \phi)\, \nabla_\phi V_\phi(s_i)\big]\,, \qquad (25)$$
where the expectation $\mathbb{E}_{s_{i:i+d} \mid \theta}$ denotes sampling from the current policy $\pi_\theta$. These updates exhibit a striking similarity to the updates expressed in (16) and (17). In fact, if one takes PCL with $\tau \to 0$ and omits the replay buffer, a slight variation of A2C is recovered. In this sense, one can interpret PCL as a generalization of A2C. Moreover, while A2C is restricted to on-policy samples, PCL minimizes an inconsistency measure that is defined on any path, hence it can exploit replay data to enhance its efficiency via off-policy learning.
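To make the correspondence concrete, the advantage (23) has exactly the shape of the consistency (14) with the log-policy terms removed; a sketch in the same style as `soft_consistency` above:

```python
import numpy as np

def a2c_advantage(values, rewards, gamma):
    """A(s_{i:i+d}, phi) from (23); equals soft_consistency(...) at tau = 0."""
    rewards = np.asarray(rewards)
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    return (-values[0] + gamma ** d * values[-1]
            + np.sum(discounts * rewards))
```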
It is also important to note that for A2C, it is essential that $V_\phi$ tracks the non-stationary target $V^{\pi_\theta}$ to ensure suitable variance reduction. In PCL, no such tracking is required. This difference is more dramatic in Unified PCL, where a single model is trained both as an actor and a critic. That is, it is not necessary to have a separate actor and critic; the actor itself can serve as its own critic.
One can also compare PCL to hard-max temporal consistency RL algorithms, such as Q-learning [43]. In fact, setting the rollout to $d = 1$ in Unified PCL leads to a form of soft Q-learning, with the degree of softness determined by $\tau$. We therefore conclude that the path consistency-based algorithms developed in this paper also generalize Q-learning. Importantly, PCL and Unified PCL are not restricted to single-step consistencies, which is a major limitation of Q-learning. While some have proposed using multi-step backups for hard-max Q-learning [26, 21], such an approach is not theoretically sound, since the rewards received after a non-optimal action do not relate to the hard-max Q-values $Q^\circ$. Therefore, one can interpret the notion of temporal consistency proposed in this paper as a sound generalization of the one-step temporal consistency given by hard-max Q-values.
6 Related Work
Connections between softmax Q-values and optimal entropy-regularized policies have been previously
noted. In some cases entropy regularization is expressed in the form of relative entropy [4, 6, 7, 31],
and in other cases it is the standard entropy [47]. While these papers derive similar relationships to (7)
and (8), they stop short of stating the single- and multi-step consistencies over all action choices we
highlight. Moreover, the algorithms proposed in those works are essentially single-step Q-learning
variants, which suffer from the limitation of using single-step backups. Another recent work [25]
uses the softmax relationship in the limit of $\tau \to 0$ and proposes to augment an actor-critic algorithm
with offline updates that minimize a set of single-step hard-max Bellman errors. Again, the methods
we propose are differentiated by the multi-step path-wise consistencies which allow the resulting
algorithms to utilize multi-step trajectories from off-policy samples in addition to on-policy samples.
The proposed PCL and Unified PCL algorithms bear some similarity to multi-step Q-learning [26],
which rather than minimizing one-step hard-max Bellman error, optimizes a Q-value function
approximator by unrolling the trajectory for some number of steps before using a hard-max backup.
While this method has shown some empirical success [21], its theoretical justification is lacking,
since rewards received after a non-optimal action no longer relate to the hard-max Q-values $Q^\circ$. In
contrast, the algorithms we propose incorporate the log-probabilities of the actions on a multi-step
rollout, which is crucial for the version of softmax consistency we consider.
Other notions of temporal consistency similar to softmax consistency have been discussed in the RL
literature. Previous work has used a Boltzmann weighted average operator [20, 5]. In particular, this
operator has been used by [5] to propose an iterative algorithm converging to the optimal maximum
reward policy inspired by the work of [15, 39]. While they use the Boltzmann weighted average,
they briefly mention that a softmax (log-sum-exp) operator would have similar theoretical properties.
More recently [3] proposed a mellowmax operator, defined as log-average-exp. These log-average-exp operators share a similar non-expansion property, and the proofs of non-expansion are related.
[Figure 1 plots omitted: average reward vs. training iterations on Synthetic Tree, Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, ReversedAddition3, and Hard ReversedAddition; legend: PCL, A3C, DQN.]
Figure 1: The results of PCL against A3C and DQN baselines. Each plot shows average reward
across 5 random training runs (10 for Synthetic Tree) after choosing best hyperparameters. We also
show a single standard deviation bar clipped at the min and max. The x-axis is number of training
iterations. PCL exhibits comparable performance to A3C in some tasks, but clearly outperforms A3C
on the more challenging tasks. Across all tasks, the performance of DQN is worse than PCL.
Additionally it is possible to show that when restricted to an infinite horizon setting, the fixed point
of the mellowmax operator is a constant shift of the $Q^*$ investigated here. In all these cases, the
suggested training algorithm optimizes a single-step consistency, unlike PCL and Unified PCL, which optimize a multi-step consistency. Moreover, these papers do not present a clear relationship between
the action values at the fixed point and the entropy regularized expected reward objective, which was
key to the formulation and algorithmic development in this paper.
Finally, there has been a considerable amount of work in reinforcement learning using off-policy data
to design more sample efficient algorithms. Broadly speaking, these methods can be understood as
trading off bias [36, 34, 19, 9] and variance [28, 23]. Previous work that has considered multi-step
off-policy learning has typically used a correction (e.g., via importance-sampling [29] or truncated
importance sampling with bias correction [23], or eligibility traces [28]). By contrast, our method
defines an unbiased consistency for an entire trajectory applicable to on- and off-policy data. An
empirical comparison with all these methods remains however an interesting avenue for future work.
7 Experiments
We evaluate the proposed algorithms, namely PCL & Unified PCL, across several different tasks and
compare them to an A3C implementation, based on [21], and an implementation of double Q-learning
with prioritized experience replay, based on [30]. We find that PCL can consistently match or beat the
performance of these baselines. We also provide a comparison between PCL and Unified PCL and
find that the use of a single unified model for both values and policy can be competitive with PCL.
These new algorithms easily accommodate expert trajectories. Thus, for the more
difficult tasks we also experiment with seeding the replay buffer with 10 randomly sampled expert
trajectories. During training we ensure that these trajectories are not removed from the replay buffer
and always have a maximal priority.
The details of the tasks and the experimental setup are provided in the Appendix.
7.1 Results
We present the results of each of the variants PCL, A3C, and DQN in Figure 1. After finding the
best hyperparameters (see the Supplementary Material), we plot the average reward over training
iterations for five randomly seeded runs. For the Synthetic Tree environment, the same protocol is
performed but with ten seeds instead.
[Figure 2 plots omitted: average reward vs. training iterations on the same eight tasks; legend: PCL, Unified PCL.]
Figure 2: The results of PCL vs. Unified PCL. Overall we find that using a single model for both
values and policy is not detrimental to training. Although in some of the simpler tasks PCL has an
edge over Unified PCL, on the more difficult tasks, Unified PCL performs better.
[Figure 3 plots omitted: average reward vs. training iterations on Reverse, ReversedAddition, ReversedAddition3, and Hard ReversedAddition; legend: PCL, PCL + Expert.]
Figure 3: The results of PCL vs. PCL augmented with a small number of expert trajectories on the
hardest algorithmic tasks. We find that incorporating expert trajectories greatly improves performance.
The gap between PCL and A3C is hard to discern in some of the simpler tasks such as Copy,
Reverse, and RepeatCopy. However, a noticeable gap is observed in the Synthetic Tree and DuplicatedInput results and more significant gaps are clear in the harder tasks, including ReversedAddition,
ReversedAddition3, and Hard ReversedAddition. Across all of the experiments, it is clear that the
prioritized DQN performs worse than PCL. These results suggest that PCL is a competitive RL
algorithm, which in some cases significantly outperforms strong baselines.
We compare PCL to Unified PCL in Figure 2. The same protocol is performed to find the best
hyperparameters and plot the average reward over several training iterations. We find that using a
single model for both values and policy in Unified PCL is slightly detrimental on the simpler tasks,
but on the more difficult tasks Unified PCL is competitive or even better than PCL.
We present the results of PCL along with PCL augmented with expert trajectories in Figure 3. We
observe that the incorporation of expert trajectories helps a considerable amount. Despite only
using a small number of expert trajectories (i.e., 10) as opposed to the mini-batch size of 400, the
inclusion of expert trajectories in the training process significantly improves the agent's performance.
We performed similar experiments with Unified PCL and observed a similar lift from using expert
trajectories. Incorporating expert trajectories in PCL is relatively trivial compared to the specialized
methods developed for other policy based algorithms [1, 12]. While we did not compare to other
algorithms that take advantage of expert trajectories, this success shows the promise of using path-wise consistencies. Importantly, the ability of PCL to incorporate expert trajectories without requiring
adjustment or correction is a desirable property in real-world applications such as robotics.
8 Conclusion
We study the characteristics of the optimal policy and state values for a maximum expected reward
objective in the presence of discounted entropy regularization. The introduction of an entropy
regularizer induces an interesting softmax consistency between the optimal policy and optimal state
values, which may be expressed as either a single-step or multi-step consistency. This softmax
consistency leads to the development of Path Consistency Learning (PCL), an RL algorithm that
resembles actor-critic in that it maintains and jointly learns a model of the state values and a model of
the policy, and is similar to Q-learning in that it minimizes a measure of temporal consistency error.
We also propose the variant Unified PCL which maintains a single model for both the policy and the
values, thus upending the actor-critic paradigm of separating the actor from the critic. Unlike standard
policy based RL algorithms, PCL and Unified PCL apply to both on-policy and off-policy trajectory
samples. Further, unlike value based RL algorithms, PCL and Unified PCL can take advantage of
multi-step consistencies. Empirically, PCL and Unified PCL exhibit a significant improvement over
baseline methods across several algorithmic benchmarks.
9 Acknowledgment
We thank Rafael Cosman, Brendan O'Donoghue, Volodymyr Mnih, George Tucker, Irwan Bello, and
the Google Brain team for insightful comments and discussions.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In
Proceedings of the twenty-first international conference on Machine learning, page 1. ACM,
2004.
[2] A. Antos, C. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89-129, 2008.
[3] K. Asadi and M. L. Littman. A new softmax operator for reinforcement learning. arXiv:1612.05628, 2016.
[4] M. G. Azar, V. Gómez, and H. J. Kappen. Dynamic policy programming with function
approximation. AISTATS, 2011.
[5] M. G. Azar, V. Gómez, and H. J. Kappen. Dynamic policy programming. JMLR, 13(Nov),
2012.
[6] M. G. Azar, V. Gómez, and H. J. Kappen. Optimal control as a graphical model inference
problem. Mach. Learn. J., 87, 2012.
[7] R. Fox, A. Pakman, and N. Tishby. G-learning: Taming the noise in reinforcement learning via
soft updates. UAI, 2016.
[8] A. Gruslys, M. G. Azar, M. G. Bellemare, and R. Munos. The reactor: A sample-efficient
actor-critic architecture. arXiv preprint arXiv:1704.04651, 2017.
[9] S. Gu, E. Holly, T. Lillicrap, and S. Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. ICRA, 2016.
[10] S. Gu, T. Lillicrap, Z. Ghahramani, R. E. Turner, and S. Levine. Q-Prop: Sample-efficient
policy gradient with an off-policy critic. ICLR, 2017.
[11] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-based
policies. arXiv:1702.08165, 2017.
[12] J. Ho and S. Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565-4573, 2016.
[13] D.-A. Huang, A.-m. Farahmand, K. M. Kitani, and J. A. Bagnell. Approximate maxent inverse
optimal control and its application for mental simulation of human interactions. 2015.
[14] S. Kakade. A natural policy gradient. NIPS, 2001.
[15] H. J. Kappen. Path integrals and symmetry breaking for optimal control theory. Journal of
statistical mechanics: theory and experiment, 2005(11):P11011, 2005.
[16] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. IJRR, 2013.
[17] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.
JMLR, 17(39), 2016.
[18] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized
news article recommendation. 2010.
[19] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
Continuous control with deep reinforcement learning. ICLR, 2016.
[20] M. L. Littman. Algorithms for sequential decision making. PhD thesis, Brown University, 1996.
[21] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and
K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.
[22] V. Mnih, K. Kavukcuoglu, D. Silver, et al. Human-level control through deep reinforcement
learning. Nature, 2015.
[23] R. Munos, T. Stepleton, A. Harutyunyan, and M. Bellemare. Safe and efficient off-policy
reinforcement learning. NIPS, 2016.
[24] O. Nachum, M. Norouzi, and D. Schuurmans. Improving policy gradient by exploring underappreciated rewards. ICLR, 2017.
[25] B. O'Donoghue, R. Munos, K. Kavukcuoglu, and V. Mnih. PGQ: Combining policy gradient
and Q-learning. ICLR, 2017.
[26] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine learning, 22(1-3):283-290, 1996.
[27] J. Peters, K. Mülling, and Y. Altun. Relative entropy policy search. AAAI, 2010.
[28] D. Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department
Faculty Publication Series, page 80, 2000.
[29] D. Precup, R. S. Sutton, and S. Dasgupta. Off-policy temporal-difference learning with function
approximation. 2001.
[30] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. ICLR, 2016.
[31] J. Schulman, X. Chen, and P. Abbeel. Equivalence between policy gradients and soft Q-learning.
arXiv:1704.06440, 2017.
[32] J. Schulman, S. Levine, P. Moritz, M. Jordan, and P. Abbeel. Trust region policy optimization.
ICML, 2015.
[33] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous
control using generalized advantage estimation. ICLR, 2016.
[34] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy
gradient algorithms. ICML, 2014.
[35] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 2nd edition,
2017. Preliminary Draft.
[36] R. S. Sutton, D. A. McAllester, S. P. Singh, Y. Mansour, et al. Policy gradient methods for
reinforcement learning with function approximation. NIPS, 1999.
[37] G. Tesauro. Temporal difference learning and TD-gammon. CACM, 1995.
[38] G. Theocharous, P. S. Thomas, and M. Ghavamzadeh. Personalized ad recommendation systems
for life-time value optimization with guarantees. IJCAI, 2015.
[39] E. Todorov. Linearly-solvable Markov decision problems. NIPS, 2006.
[40] E. Todorov. Policy gradients in linearly-solvable MDPs. NIPS, 2010.
[41] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. Sample
efficient actor-critic with experience replay. ICLR, 2017.
[42] Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement
learning. ICLR, 2016.
[43] C. J. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England,
1989.
[44] C. J. Watkins and P. Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
[45] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Mach. Learn. J., 1992.
[46] R. J. Williams and J. Peng. Function optimization using connectionist reinforcement learning
algorithms. Connection Science, 1991.
[47] B. D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal
entropy. PhD thesis, CMU, 2010.
[48] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement
learning. AAAI, 2008.
6,491 | 6,871 | Premise Selection for Theorem Proving
by Deep Graph Embedding
Mingzhe Wang* Yihe Tang* Jian Wang Jia Deng
University of Michigan, Ann Arbor
Abstract
We propose a deep learning-based approach to the problem of premise selection:
selecting mathematical statements relevant for proving a given conjecture. We
represent a higher-order logic formula as a graph that is invariant to variable
renaming but still fully preserves syntactic and semantic information. We then
embed the graph into a vector via a novel embedding method that preserves the
information of edge ordering. Our approach achieves state-of-the-art results on the
HolStep dataset, improving the classification accuracy from 83% to 90.3%.
1 Introduction
Automated reasoning over mathematical proofs is a core question of artificial intelligence that dates
back to the early days of computer science [1]. It not only constitutes a key aspect of general
intelligence, but also underpins a broad set of applications ranging from circuit design to compilers,
where it is critical to verify the correctness of a computer system [2, 3, 4].
A key challenge of theorem proving is premise selection [5]: selecting relevant statements that are
useful for proving a given conjecture. Theorem proving is essentially a search problem with the
goal of finding a sequence of deductions leading from presumed facts to the given conjecture. The
space of this search is combinatorial?with today?s large mathematical knowledge bases [6, 7], the
search can quickly explode beyond the capability of modern automated theorem provers, despite
the fact that often only a small fraction of facts in the knowledge base are relevant for proving a
given conjecture. Premise selection thus plays a critical role in narrowing down the search space and
making it tractable.
Premise selection has been traditionally tackled as hand-designed heuristics based on comparing and
analyzing symbols [8]. Recently, machine learning methods have emerged as a promising alternative
for premise selection, which can naturally be cast as a classification or ranking problem. Alama et
al. [9] trained a kernel-based classifier using essentially bag-of-words features, and demonstrated
large improvement over the state of the art system. Alemi et al. [5] were the first to apply deep
learning approaches to premise selection and demonstrated competitive results without manual feature
engineering. Kaliszyk et al. [10] introduced HolStep, a large dataset of higher-order logic proofs, and
provided baselines based on logistic regression and deep networks.
In this paper we propose a new deep learning approach to premise selection. The key idea of our
approach is to represent mathematical formulas as graphs and embed them into vector space. This
is different from prior work on premise selection that directly applies deep networks to sequences
of characters or tokens [5, 10]. Our approach is motivated by the observation that a mathematical
formula can be represented as a graph that encodes the syntactic and semantic structure of the formula.
For example, the formula $\forall x\,\exists y\,(P(x) \wedge Q(x, y))$ can be expressed as the graph shown in Fig. 1,
where edges link terms to their constituents and connect quantifiers to their variables.
* Equal contribution.
[Figure 1 graph omitted: its visible node labels include $P$, $Q$, and two VAR nodes.]
Figure 1: The formula $\forall x\,\exists y\,(P(x) \wedge Q(x, y))$ can be represented as a graph.
Our hypothesis is that such graph representations are better than sequential forms because a graph
makes explicit key syntactic and semantic structures such as composition, variable binding, and
co-reference. Such an explicit representation helps the learning of invariant feature representations.
For example, $P(x, T(f(z) + g(z), v)) \wedge Q(y)$ and $P(y) \wedge Q(x)$ share the same top level structure $P \wedge Q$, but such similarity would be less apparent and harder to detect from a sequence of tokens
because syntactically close terms can be far apart in the sequence.
Another benefit of a graph representation is that we can make it invariant to variable renaming
while preserving the semantics. For example, the graph for $\forall x\,\exists y\,(P(x) \wedge Q(x, y))$ (Fig. 1) is the same regardless of how the variables are named in the formula, but the semantics of quantifiers and co-reference is completely preserved: the quantifier $\forall$ binds a variable that is the first argument of both $P$ and $Q$, and the quantifier $\exists$ binds a variable that is the second argument of $Q$.
It is worth noting that although a sequential form encodes the same information, and a neural network
may well be able to learn to convert a sequence of tokens into a graph, such a neural conversion
is unnecessary: unlike parsing natural language sentences, constructing a graph out of a formula
is straightforward and unambiguous. Thus there is no obvious benefit to be gained through an
end-to-end approach that starts from the textual representation of formulas.
To perform premise selection, we convert a formula into a graph, embed the graph into a vector,
and then classify the relevance of the formula. To embed a graph into a vector, we assign an initial
embedding vector for each node of the graph, and then iteratively update the embedding of each
node using the embeddings of its neighbors. We then pool the embeddings of all nodes to form
the embedding of the entire graph. The parameters of each update are learned end to end through
backpropagation. In other words, we learn a deep network that embeds a graph into a vector; the
topology of the unrolled network is determined by the input graph.
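A rough sketch of this embed-update-pool loop is below; the neighbor aggregation, learned update function, and max-pooling here are our own simplifications of the scheme just described (the paper's actual update, which also preserves edge ordering, is specified later):

```python
import numpy as np

def embed_graph(node_init, edges, update, steps=2):
    """node_init: {node: initial vector}; edges: directed (u, v) pairs;
    update: learned map (old embedding, aggregated messages) -> new embedding."""
    emb = dict(node_init)
    for _ in range(steps):
        msg = {n: np.zeros_like(e) for n, e in emb.items()}
        for u, v in edges:          # pass messages in both directions
            msg[v] = msg[v] + emb[u]
            msg[u] = msg[u] + emb[v]
        emb = {n: update(emb[n], msg[n]) for n in emb}
    return np.max(np.stack(list(emb.values())), axis=0)   # pool over nodes
```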
We perform experiments using the HolStep dataset [10], which consists of over two million conjecture-statement pairs that can be used to evaluate premise selection. The results show that our graph-embedding approach achieves large improvement over sequence-based models. In particular, our
approach improves the state-of-the-art accuracy on HolStep by 7.3%.
The main contributions of this work are twofold. First, we propose a novel approach to premise
selection that represents formulas as graphs and embeds them into vectors. To the best our knowledge,
this is the first time premise selection is approached using deep graph embedding. Second, we
improve the state-of-the-art classification accuracy on the HolStep dataset from 83% to 90.3%.
2 Related Work
Research on automated theorem proving has a long history [11]. Decades of research has resulted
in a variety of well-developed automated theorem provers such as Coq [12], Isabelle [13], and
E [14]. However, no existing automated provers can scale to large mathematical libraries due to
combinatorial explosion of the search space. This limitation gave rise to the development of interactive
theorem proving [11], which combines humans and machines in theorem proving and has led to
impressive achievements such as the proof of the Kepler conjecture [15] and the formal proof of the
Feit-Thompson problem [16].
Premise selection as a machine learning problem was introduced by Alama et al. [9], who constructed
a corpus of proofs to train a kernelized classifier using bag-of-word features that represent the
occurrences of terms in a vocabulary. Deep learning techniques were first applied to premise selection
in the DeepMath work by Alemi et al. [5], who applied recurrent and convolutional networks to
formulas represented as textual sequences, and showed that deep learning approaches can achieve
competitive results against baselines using hand-engineered features. Serving the needs for large
2
datasets for training deep models, Kaliszyk et al. [10] introduced the HolStep dataset that consists of
2M statements and 10K conjectures, an order of magnitude larger than the DeepMath dataset [5].
A related task to premise selection is proof guidance [17, 18, 19, 20, 21, 22], the selection of the next
clause to process inside an automated theorem prover. Proof guidance differs from premise selection
in that proof guidance depends on the logical representation, inference algorithm, and current state
inside a theorem prover, whereas premise selection is only about picking relevant statements as the
initial input to a theorem prover that is treated as a black box. Because proof guidance is tightly
integrated with proof search and is invoked repeatedly, efficiency is as important as accuracy, whereas
for premise selection efficiency is not as critical.
Loos et al. [23] were the first to apply deep networks to proof guidance. They experimented with
both sequential representations and tree representations (recursive neural networks [24, 25]). Note
that their tree representations are simply the parse trees, which, unlike our graphs, are not invariant
to variable renaming and do not capture how quantifiers bind variables. Whalen [22] uses GRU
networks to guide the exploration of partial proof trees, with formulas represented as sequences of
tokens.
In addition to premise selection and proof guidance, other aspects of theorem proving have also
benefited from machine learning. For example, Kühlwein & Urban [26] applied kernel methods to
strategy finding, the problem of searching for good parameter configurations for an automated prover.
Similarly, Bridge et al. [27] applied SVM and Gaussian Processes to select good heuristics, which
are collections of standard settings for parameters and other decisions.
Our graph embedding method is related to a large body of prior work on embeddings and graphs.
Deepwalk [28], LINE [29] and Node2Vec [30] focus on learning node embeddings. Similar to
Word2Vec [31, 32], they optimize the embedding of a node to predict nodes in a neighborhood.
Recursive neural networks [33, 25] and Tree LSTMs [34] consider embeddings of trees, a special
type of graphs. Misra & Artzi [35] embed tree representations of typed lambda calculus expressions
into vectors, with variable nodes labeled with only their types. This leads to invariance to variable
renaming, but is not entirely lossless in terms of semantics. If a formula contains multiple variables
of the same type but with different names, it is not possible to know which lambda abstraction binds
which variable.
Neural networks on general graphs were first introduced by Gori et al. [36] and Scarselli et al. [37].
Many follow-up works [38, 39, 40, 41, 42, 43] proposed specific architectures to handle graph-based
input by extending recurrent neural network to graph data [36, 39, 40] or making use of graph
convolutions based on spectral graph theories [38, 41, 42, 43, 44]. Our approach is most similar to
the work of [38], where they encode molecular fragments as neural fingerprints with graph-based
convolutions for chemical applications. But to the best of our knowledge, no previous deep learning
approaches on general graphs preserve the order of edges. In contrast, we propose a novel way of
graph embedding that can preserve the information of edge ordering, and demonstrate its effectiveness
for premise selection.
3 FormulaNet: Formulas to Graphs to Embeddings

3.1 Formulas to Graphs
We consider formulas in higher-order logic [45]. A higher-order formula can be defined recursively
based on a vocabulary of constants, variables, and quantifiers. A variable or a constant can act as a
value or a function. For example, ∀f ∃x (f(x, c) ∨ P(f)) is a higher-order formula where ∀ and ∃ are
quantifiers, c is a constant value, P and ∨ are constant functions, x is a variable value, and f is both a
variable function and a variable value.
To construct a graph from a formula, we first parse the formula into a tree, where each internal node
represents a constant function, a variable function, or a quantifier, and each leaf node represents a
variable value or a constant value. We then add edges that connect a quantifier node to all instances of
its quantified variables, after which we merge (leaf) nodes that represent the same constant or variable.
Finally, for each occurrence of a variable, we replace its original name with VAR, or VARFUNC if it
acts as a function. Fig. 2 illustrates these steps.
Figure 2: From a formula to a graph: (a) the input formula; (b) parsing the formula into a tree; (c)
merging leaves and connecting quantifiers to variables; (d) renaming variables.
Formally, let S be the set of all formulas, Cv be the set of constant values, Cf the set of constant
functions, Vv the set of variable values, Vf the set of variable functions, and Q the set of quantifiers.
Let s be a higher-order logic formula with no free variables (any free variable can be bound by
adding quantifiers ∀ to the front of the formula). The graph G_s = (V_s, E_s) of formula s can be
recursively constructed as follows:
- If s = α, where α ∈ C_v ∪ V_v, then G_s ← ({α}, ∅), i.e. the graph contains a single node α.
- If s = f(s_1, s_2, ..., s_n), where f ∈ C_f ∪ V_f and s_1, ..., s_n ∈ S, then we perform
  G′_s ← (∪_i V_{s_i} ∪ {f}, ∪_i E_{s_i} ∪ {(f, δ(s_i))}_i), followed by G_s ← MERGE_C(G′_s), where
  δ(s_i) is the "head node" of s_i and MERGE_C is an operation that merges the same constant
  (leaf) nodes in the graph.
- If s = qx t, where q ∈ Q, t ∈ S, and x ∈ V_v ∪ V_f, then we perform
  G″_s ← (V_t ∪ {q}, E_t ∪ {(q, δ(t))} ∪ ⋃_{v ∈ V_t[x]} {(q, v)}), followed by G′_s ← MERGE_x(G″_s)
  and G_s ← RENAME_x(G′_s), where V_t[x] is the set of nodes that represent the variable x in
  the graph of t, MERGE_x is an operation that merges all nodes representing the variable x into
  a single node, and RENAME_x is an operation that renames x to VAR (or VARFUNC if x acts as
  a function).
By construction, our graph is invariant to variable renaming, yet no syntactic or semantic information
is lost. This is because for a variable node (either as a function or value), its original name in the
formula is irrelevant in the graph: the graph structure already encodes where it is syntactically and
which quantifier binds it.
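To make the construction concrete, here is a minimal sketch in Python (a language choice of ours; the paper ships no code). The parse-tree encoding, the quantifier tokens "!" and "?", and all helper names are assumptions; only the operations it mimics (constant merging, quantifier-variable linking, renaming to VAR/VARFUNC) come from the text:

```python
# Minimal sketch of the formula-to-graph construction (Fig. 2).
# A parse tree is encoded as (token, [children]); "!" / "?" stand in for
# the forall / exists quantifier tokens. All names here are assumptions.

def formula_to_graph(tree):
    nodes, edges = [], []                  # nodes[i] = label; edges keep order
    consts = {}                            # MERGE_C: one node per constant leaf

    def new_node(label):
        nodes.append(label)
        return len(nodes) - 1

    def build(t, bound):                   # bound: variable name -> node id
        token, children = t
        if token in ("!", "?"):            # quantifier node: (q, [var, body])
            (var_name, _), body = children[0], children[1]
            qid, vid = new_node(token), new_node("VAR")
            bid = build(body, {**bound, var_name: vid})
            edges.append((qid, bid))       # quantifier -> head of its scope
            edges.append((qid, vid))       # quantifier -> its merged variable
            return qid
        if token in bound:                 # occurrence of a bound variable
            vid = bound[token]
            for c in children:             # variable applied as a function
                edges.append((vid, build(c, bound)))
            return vid
        if not children and token in consts:
            return consts[token]           # merge repeated constant leaves
        nid = new_node(token)
        if not children:
            consts[token] = nid
        for c in children:                 # edge order = argument order
            edges.append((nid, build(c, bound)))
        return nid

    build(tree, {})
    for (u, _) in edges:                   # RENAME_x: a variable with outgoing
        if nodes[u] == "VAR":              # edges acts as a function
            nodes[u] = "VARFUNC"
    return nodes, edges

# Example from the text: forall f. exists x. (f(x, c) \/ P(f))
t = ("!", [("f", []), ("?", [("x", []),
     ("OR", [("f", [("x", []), ("c", [])]), ("P", [("f", [])])])])])
print(formula_to_graph(t))
```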
3.2 Graphs to Embeddings
To embed a graph to a vector, we take an approach similar to performing convolution or message
passing on graphs [38]. The overall idea is to associate each node with an initial embedding and
iteratively update them. As shown in Fig. 3, suppose v and each node around v has an initial
embedding. We update the embedding of v using the node embeddings in its neighborhood. After
multi-step updates, the embedding of v will contain information from its local structure. Then we
max-pool the node embeddings across all nodes in the graph to form an embedding for the graph.
To initialize the embedding for each node, we use the one-hot vector that represents the name of the
node. Note that in our graph all variables have the same name VAR (or VARFUNC if the variable acts
as a function), so their initial embeddings are the same. All other nodes (constants and quantifiers)
each have their names and thus their own one-hot vectors.
We then repeatedly update the embedding of each node using the embeddings of its neighbors. Given
a graph G = (V, E), at step t + 1 we update the embedding x_v^{t+1} of node v as follows:

$$x_v^{t+1} = F_P^t\Big(x_v^t,\ \frac{1}{d_v}\Big[\sum_{(u,v)\in E} F_I^t(x_u^t, x_v^t) + \sum_{(v,u)\in E} F_O^t(x_v^t, x_u^t)\Big]\Big), \quad (1)$$
where d_v is the degree of node v, F_I^t and F_O^t are update functions using incoming and outgoing
edges, and F_P^t is an update function that combines the old embedding with the new update from neighbor
Figure 3: An example of applying the order-preserving updates in Eqn. 2. To update node v, we
consider its neighbors and its position in all treelets (see Sec. 3.3) it belongs to.
nodes. We parametrize these update functions as neural networks; the detailed configurations will be
given in Sec. 4.2.
It is worth noting that all node embeddings are updated in parallel using the same update functions,
but the update functions can be different across steps to allow more flexibility. Repeated updates allow
each embedding to incorporate information from a bigger neighborhood and thus capture more global
structures. Interestingly, with zero updates, our model reduces to a bag-of-words representation, that
is, a max pooling of individual node embeddings.
To predict the usefulness of a statement for a conjecture, we send the concatenation of their embeddings to a classifier. The classification can also be done in the unconditional setting where only the
statement is given; in this case we directly send the embedding of the statement to a classifier. The
parameters of the update functions and the classifiers are learned end to end through backpropagation.
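As an illustration of what one step of Eqn. 1 computes, consider the NumPy sketch below (purely illustrative; in the real model F_P, F_I and F_O are the learned networks of Fig. 4 and everything runs batched on accelerators):

```python
import numpy as np

def update_step(x, edges, F_P, F_I, F_O):
    """One embedding update per Eqn. 1. x: [num_nodes, d] array of
    step-t embeddings; edges: ordered (u, v) pairs; F_P / F_I / F_O are
    callables standing in for the learned update networks."""
    agg = np.zeros_like(x)
    deg = np.zeros(len(x))
    for (u, v) in edges:
        agg[v] += F_I(x[u], x[v])        # v's incoming edge (u, v)
        agg[u] += F_O(x[u], x[v])        # u's outgoing edge (u, v)
        deg[u] += 1
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]   # the 1/d_v normalisation
    return np.stack([F_P(x[v], agg[v]) for v in range(len(x))])

# After T such steps, the graph embedding is the max-pooling over nodes:
# g = x.max(axis=0)
```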
3.3 Order-Preserving Embeddings
For functions in a formula, the order of its arguments matters. That is, f (x, y) cannot generally be
presumed to mean the same as f (y, x). But our current embedding update as defined in Eqn. 1 is
invariant to the ordering of arguments. Given that it is possible that the ordering of arguments can
be a useful feature for premise selection, we now consider a variant of our basic approach to make
our graph embeddings sensitive to the ordering of arguments. In this variant, we update each node
considering the ordering of its incoming edges and outgoing edges.
Before we define our new update equation, we need to introduce the notion of a treelet. Given a node
v in graph G = (V, E), let (v, w) ∈ E be an outgoing edge of v, and let r_v(w) ∈ {1, 2, ...} be the
rank of edge (v, w) among all outgoing edges of v. We define a treelet of graph G = (V, E) as a
tuple of nodes (u, v, w) ∈ V × V × V such that (1) both (v, u) and (v, w) are edges in the graph
and (2) (v, u) is ranked before (v, w) among all outgoing edges of v. In other words, a treelet is a
subgraph that consists of a head node v, a left child u and a right child w. We use T_G to denote all
treelets of graph G, that is, T_G = {(u, v, w) : (v, u) ∈ E, (v, w) ∈ E, r_v(u) < r_v(w)}.
Now, when we update a node embedding, we consider not only its direct neighbors, but also its roles
in all the treelets it belongs to:
$$x_v^{t+1} = F_P^t\Big(x_v^t,\ \frac{1}{d_v}\Big[\sum_{(u,v)\in E} F_I^t(x_u^t, x_v^t) + \sum_{(v,u)\in E} F_O^t(x_v^t, x_u^t)\Big] + \frac{1}{e_v}\Big[\sum_{(v,u,w)\in T_G} F_L^t(x_v^t, x_u^t, x_w^t) + \sum_{(u,v,w)\in T_G} F_H^t(x_u^t, x_v^t, x_w^t) + \sum_{(u,w,v)\in T_G} F_R^t(x_u^t, x_w^t, x_v^t)\Big]\Big), \quad (2)$$
where e_v = |{(u, v, w) : (u, v, w) ∈ T_G ∨ (v, u, w) ∈ T_G ∨ (u, w, v) ∈ T_G}| is the total number of
treelets containing v. In this new update equation, F_L is an update function that considers a treelet
where node v is the left child. Similarly, F_H considers a treelet where node v is the head and F_R
considers a treelet where node v is the right child. As in Sec. 3.2, the same update functions are
applied to all nodes at each step, but across steps the update functions can be different. Fig. 3 shows
the update equation of a concrete example.
Our design of Eqn. 2 now allows a node to be embedded differently dependent on the ordering of its
own arguments and dependent on which argument slot it takes in a parent function. For example,
the function node f can now be embedded differently for f(a, b) and f(b, a) because the output
of F_H can be different. As another example, in the formula g(f(a), f(a)), there are two function
Figure 4: Configurations of the update functions and classifiers: (a) FP in Eqn. 1 and 2; (b) FI , FO
in Eqn. 1 and 2, and FL , FH , FR in Eqn. 2; (c) conditional classifier; (d) unconditional classifier.
nodes with the same name f , same parent g, and same child a, but they can be embedded differently
because only FL will be applied to the f as the first argument of g and only FR will be applied to the
f as the second argument of g.
To distinguish the two variants of our approach, we call the method with the treelet update terms
FormulaNet, as opposed to FormulaNet-basic, which does not consider edge ordering.
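The treelet set T_G and the counts e_v of Eqn. 2 are cheap to precompute from the ordered edge list; a possible sketch (the helper names are ours):

```python
from collections import defaultdict
from itertools import combinations

def treelets(edges):
    """T_G: all (u, v, w) with head v, left child u, right child w,
    where (v, u) is ranked before (v, w) among v's outgoing edges."""
    out = defaultdict(list)
    for (v, u) in edges:                   # edges listed in argument order,
        out[v].append(u)                   # so list position encodes r_v
    return [(u, v, w)
            for v, children in out.items()
            for u, w in combinations(children, 2)]

def treelet_counts(T, num_nodes):
    """e_v: how many treelets contain node v in any of the three slots."""
    e = [0] * num_nodes
    for triple in T:
        for n in set(triple):              # count each treelet once per node
            e[n] += 1
    return e
```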
4 Experiments

4.1 Dataset and Evaluation
We evaluate our approach on the HolStep dataset [10], a recently introduced benchmark for evaluating
machine learning approaches for theorem proving. It was constructed from the proof trace files of
the HOL Light theorem prover [7] on its multivariate analysis library [46] and the formal proof
of the Kepler conjecture. The dataset contains 11,410 conjectures, including 9,999 in the training
set and 1,411 in the test set. Each conjecture is associated with a set of statements, each with a
ground truth label on whether the statement is useful for proving the conjecture. There are 2,209,076
conjecture-statement pairs in total. We hold out 700 conjectures from the training set as the validation
set to tune hyperparameters and perform ablation analysis.
Following the evaluation setup proposed in [10], we treat premise selection as a binary classification
task and evaluate classification accuracy. Also following [10], we evaluate two settings, the conditional setting where both the conjecture and the statement are given, and the unconditional setting
where the conjecture is ignored. In HolStep, each conjecture is associated with an equal number of
positive statements and negative statements, so the accuracy of random prediction is 50%.
4.2 Network Configurations
The initial one-hot vector for each node has 1909 dimensions, representing 1909 unique tokens.
These 1909 tokens include 1906 unique constants from the training set and three special tokens,
"VAR", "VARFUNC", and "UNKNOWN" (representing all novel tokens during testing). We use a
linear layer to map one-hot encodings to 256-dimensional vectors. All of the following intermediate
embeddings are 256-dimensional.
The update functions in Eqn. 1 and Eqn. 2 are parametrized as neural networks. Fig. 4 (a), (b) shows
their configurations. All update functions are configured the same way: concatenation of the inputs,
followed by two fully connected layers, each with batch normalization [47] and a ReLU.
The classifier for the conditional setting takes in the embeddings from the conjecture and the statement.
Its configuration is shown in Fig. 4 (c). The classifier for the unconditional setting uses only the
embedding of the statement; its configuration is shown in Fig. 4 (d).
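A sketch of one update function with the configuration of Fig. 4(b), written in PyTorch (our framework choice; the paper does not name one, and the exact layer order is our reading of the figure):

```python
import torch
import torch.nn as nn

class UpdateFn(nn.Module):
    """Concat inputs -> FC -> BN -> ReLU -> FC -> BN -> ReLU (Fig. 4(b)).
    num_inputs is 2 for F_I/F_O and 3 for F_L/F_H/F_R; dim = 256."""
    def __init__(self, num_inputs, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_inputs * dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU())

    def forward(self, *xs):   # each x: [num_edges_or_treelets, dim]
        return self.net(torch.cat(xs, dim=-1))
```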
4.3 Model Training
We train our networks using RMSProp [48] with a learning rate of 0.001 and a weight decay of 1 × 10^-4. We
lower the learning rate by 3× after each epoch. We train all models for five epochs and all networks
converge after about three or four epochs.
Table 1: Classification accuracy on the test set of our approach versus baseline methods on HolStep
in the unconditional setting (conjecture unknown) and the conditional setting (conjecture given).

                        Unconditional   Conditional
    CNN [10]                 83             82
    CNN-LSTM [10]            83             83
    FormulaNet-basic         89.0           89.1
    FormulaNet               90.0           90.3

It is worth noting that there are two levels of batching in our approach: intra-graph batching and
inter-graph batching. Intra-graph batching arises from the fact that to embed a graph, each update
function (F_P, F_I, F_O, F_L, F_H, F_R in Eqn. 2) is applied to all nodes in parallel. This is the same as
training each update function as a standalone network with a batch of input examples. Thus regular
batch normalization can be directly applied to the inputs of each update function within a single
graph, as shown in Fig. 4(a)(b).
Furthermore, this batch normalization within a graph can be run in the training mode even when we
are only performing inference to embed a graph, because there are multiple input examples to each
update function within a graph. Another level of batching is the regular batching of multiple graphs
in training, as is necessary for training the classifier. As usual, batch normalization across graphs is
done in the evaluation mode in test time.
We also apply intermediate supervision after each step of embedding update using a separate classifier.
For training, our loss function is the sum of cross-entropy losses for each step. We use the prediction
from the last step as our final predictions.
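Putting the pieces of Secs. 4.2-4.3 together, the training objective could be sketched as follows (the `model` interface below, with one classifier head per update step, is hypothetical; only the optimizer settings and the summed per-step cross-entropy come from the text):

```python
import torch
import torch.nn.functional as F

# opt = torch.optim.RMSprop(model.parameters(), lr=1e-3, weight_decay=1e-4)

def training_loss(model, batch):
    """Sum of cross-entropy losses over all update steps (intermediate
    supervision); model.embed_steps yields one (conjecture, statement)
    embedding pair after each step, model.heads[t] is the step-t classifier."""
    loss = 0.0
    for t, (conj, stmt) in enumerate(model.embed_steps(batch)):
        logits = model.heads[t](torch.cat([conj, stmt], dim=-1))
        loss = loss + F.cross_entropy(logits, batch.labels)
    return loss   # at test time only the last step's prediction is used
```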
4.4 Main Results
Table 1 compares the accuracy of our approach versus the best existing results [10]. Our approach
improves the best existing result by a large margin from 83% to 90.3% in the conditional setting
and from 83% to 90.0% in the unconditional setting. We also see that FormulaNet gives a 1% improvement over FormulaNet-basic, validating our hypothesis that the order of function arguments
provides useful cues.
Consistent with prior work [10], conditional and unconditional selection have similar performances.
This is likely due to the data distribution in HolStep. In the training set, only 0.8% of the statements
appear in both a positive statement-conjecture pair and a negative statement-conjecture pair, and the
upper performance bound of unconditional selection is 97%. In addition, HolStep contains 9,999
unique conjectures but 1,304,888 unique statements for training, so it is likely easier for the network
to learn useful patterns from statements than from conjectures.
We also apply Deepwalk [28], an unsupervised approach for generating node embeddings that is
purely based on graph topology without considering the token associated with each node. For each
formula graph, we max-pool its node embeddings and train a classifier. The accuracy is 61.8%
(conditional) and 61.7% (unconditional). This result suggests that for embedding formulas it is
important to use token information and end-to-end supervision.
4.5 Ablation Experiments
Invariance to Variable Renaming One motivation for our graph representation is that the meaning
of formulas should be invariant to the renaming of variable values and variable functions. To achieve
such invariance, we perform two main transformations of a parse tree to generate a graph: (1) we
convert the tree to a graph by linking quantifiers and variables, and (2) we discard the variable names.
We now study the effect of these steps on the premise selection task. We compare FormulaNet-basic
with the following three variants whose only difference is the format of the input graph:
- Tree-old-names: Use the parse tree as the graph and keep all original names for the nodes.
  An example is the tree in Fig. 2 (b).
- Tree-renamed: Use the parse tree as the graph but rename all variable values to VAR and
  variable functions to VARFUNC.
- Graph-old-names: Use the same graph as FormulaNet-basic but keep all original names for
  the nodes, thus making the graph embedding dependent on the original variable names. An
  example is the graph in Fig. 2 (c).
Table 2: The accuracy of FormulaNet-basic and its ablated versions on the original and renamed
validation sets.

                          Tree-old-names   Tree-renamed   Graph-old-names   Our Graph
    Original Validation        89.7            84.7            89.8            89.9
    Renamed Validation         82.3            84.7            83.5            89.9
Table 3: Validation accuracy of the proposed models with different numbers of update steps on
conditional premise selection.

    Number of steps        0      1      2      3      4
    FormulaNet-basic     81.5   89.3   89.8   89.9   90.0
    FormulaNet           81.5   90.4   91.0   91.1   90.8
We train these variants on the same training set as FormulaNet-basic. To compare with FormulaNet-basic,
we evaluate them on the same held-out validation set. In addition, we generate a new validation
set (Renamed Validation) by randomly permuting the variable names in the formulas: the textual
representation is different but the semantics remains the same. We also compare all models on this
renamed validation set to evaluate their robustness to variable renaming.
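A sketch of how such a Renamed Validation set could be generated, assuming a tokenized formula and a predicate that recognizes variable tokens (both are assumptions; HolStep would need its real tokenizer):

```python
import random

def rename_variables(tokens, is_variable):
    """Consistently permute the variable names within one formula:
    the textual form changes, the semantics does not."""
    names = []
    for tok in tokens:
        if is_variable(tok) and tok not in names:
            names.append(tok)
    shuffled = names[:]
    random.shuffle(shuffled)
    mapping = dict(zip(names, shuffled))
    return [mapping.get(tok, tok) for tok in tokens]
```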
Table 2 reports the results. If we use a tree with the original names, there is a slight drop when
evaluated on the original validation set, but there is a very large drop when evaluated on the renamed
validation set. This shows that there are features exploitable in the original variable names and the
model is exploiting them, but the model is essentially overfitting to the bias in the original names and
cannot generalize to renamed formulas. The same applies to the model trained on graphs with the
original names, whose performance also drops drastically on renamed formulas.
It is also interesting to note that the model trained on renamed trees performs poorly, although it is
invariant to variable renaming. This shows that the syntactic and semantic information encoded in
the graph on variables, particularly their quantifiers and coreferences, is important.
Number of Update Steps An important hyperparameter of our approach is the number of steps
to update the embeddings. Zero steps can only embed a bag of unstructured tokens, while more
steps can embed information from larger graph structures. Table 3 compares the accuracy of models
Figure 5: Nearest neighbors of node embeddings after step 1 with FormulaNet. Query nodes are in
the first column. The color of each node is coded by a t-SNE [49] projection of its step-0 embedding
into 2D. The closer the colors, the nearer two nodes are in the step-0 embedding space.
with different numbers of update steps. Perhaps surprisingly, models with zero steps can already
achieve an accuracy of 81.5%, showing that much of the performance comes from just the names
of constant functions and values. More steps lead to notable increases of accuracy, showing that
the structures in the graph are important. There is a diminishing return after 3 steps, but this can
be reasonably expected because a radius of 3 in a graph is a fairly sizable neighborhood and can
encompass reasonably complex expressions: a node can influence its great-grandchildren and
great-grandparents. In addition, it would naturally be more difficult to learn generalizable features
from long-range patterns because they are more varied and each of them occurs much less frequently.
4.6 Visualization of Embeddings

To qualitatively examine the learned embeddings, we find a set of nodes with similar embeddings
and visualize their local structures in Fig. 5. In each row, we use a node as the query and find the
nearest neighbors across all nodes from different graphs. We can see that the nearest neighbors have
similar structures in terms of topology and naming. This demonstrates that our graph embeddings
can capture syntactic and semantic structures of a formula.
5 Conclusion
In this work, we have proposed a deep learning-based approach to premise selection. We represent
a higher-order logic formula as a graph that is invariant to variable renaming but fully preserves
syntactic and semantic information. We then embed the graph into a continuous vector through a
novel embedding method that preserves the information of edge ordering. Our approach has achieved
state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to
90.3%.
Acknowledgements This work is partially supported by the National Science Foundation under
Grant No. 1633157.
References
[1] Alan JA Robinson and Andrei Voronkov. Handbook of automated reasoning, volume 1. Elsevier, 2001.
[2] Christoph Kern and Mark R Greenstreet. Formal verification in hardware design: a survey. ACM
Transactions on Design Automation of Electronic Systems (TODAES), 4(2):123-193, 1999.
[3] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin, Dhammika
Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. seL4: Formal verification of an OS kernel.
In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, pages 207-220.
ACM, 2009.
[4] Xavier Leroy. Formal verification of a realistic compiler. Communications of the ACM, 52(7):107-115,
2009.
[5] Alexander A Alemi, Francois Chollet, Niklas Een, Geoffrey Irving, Christian Szegedy, and Josef Urban.
DeepMath - deep sequence models for premise selection. In D. D. Lee, M. Sugiyama, U. V. Luxburg,
I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2235-2243.
Curran Associates, Inc., 2016.
[6] Adam Naumowicz and Artur Korniłowicz. A Brief Overview of Mizar, pages 67-72. Springer Berlin
Heidelberg, Berlin, Heidelberg, 2009.
[7] John Harrison. HOL Light: An Overview, pages 60-66. Springer Berlin Heidelberg, Berlin, Heidelberg,
2009.
[8] Kryštof Hoder and Andrei Voronkov. Sine qua non for large theory reasoning. In International Conference
on Automated Deduction, pages 299-314. Springer, 2011.
[9] Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for
mathematics by corpus analysis and kernel methods. Journal of Automated Reasoning, 52(2):191-213,
2014.
[10] Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A machine learning dataset for
higher-order logic theorem proving. arXiv preprint arXiv:1703.00426, 2017.
[11] John Harrison, Josef Urban, and Freek Wiedijk. History of interactive theorem proving. In Computational
Logic, volume 9, pages 135-214, 2014.
[12] Gilles Dowek, Amy Felty, Hugo Herbelin, Gérard Huet, Chetan Murthy, Catherine Parent, Christine
Paulin-Mohring, and Benjamin Werner. The COQ Proof Assistant: User's Guide: Version 5.6. INRIA,
1992.
[13] Makarius Wenzel, Lawrence C Paulson, and Tobias Nipkow. The Isabelle framework. In International
Conference on Theorem Proving in Higher Order Logics, pages 33-38. Springer, 2008.
[14] Stephan Schulz. E: a brainiac theorem prover. AI Communications, 15(2-3):111-126, 2002.
[15] Thomas Hales, Mark Adams, Gertrud Bauer, Dat Tat Dang, John Harrison, Truong Le Hoang, Cezary
Kaliszyk, Victor Magron, Sean McLaughlin, Thang Tat Nguyen, et al. A formal proof of the Kepler
conjecture. arXiv preprint arXiv:1501.02155, 2015.
[16] Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot, Stéphane
Le Roux, Assia Mahboubi, Russell O'Connor, Sidi Ould Biha, Ioana Pasca, Laurence Rideau, Alexey
Solovyev, Enrico Tassi, and Laurent Théry. A Machine-Checked Proof of the Odd Order Theorem, pages
163-179. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.
[17] Christian Suttner and Wolfgang Ertel. Automatic acquisition of search guiding heuristics. In International
Conference on Automated Deduction, pages 470-484. Springer, 1990.
[18] Jörg Denzinger, Matthias Fuchs, Christoph Goller, and Stephan Schulz. Learning from previous proof
experience: A survey. Citeseer, 1999.
[19] SA Schulz. Learning search control knowledge for equational deduction, volume 230. IOS Press, 2000.
[20] Michael Färber and Chad Brown. Internal guidance for Satallax. In International Joint Conference on
Automated Reasoning, pages 349-361. Springer, 2016.
[21] Cezary Kaliszyk and Josef Urban. FEMaLeCoP: fairly efficient machine learning connection prover. In
Logic for Programming, Artificial Intelligence, and Reasoning, pages 88-96. Springer, 2015.
[22] Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv preprint
arXiv:1608.02644, 2016.
[23] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search.
arXiv preprint arXiv:1701.06972, 2017.
[24] Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D Manning, and Andrew Y. Ng. Dynamic
pooling and unfolding recursive autoencoders for paraphrase detection. In J. Shawe-Taylor, R. S. Zemel,
P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing
Systems 24, pages 801-809. Curran Associates, Inc., 2011.
[25] Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural
language with recursive neural networks. In Proceedings of the 28th International Conference on Machine
Learning (ICML-11), pages 129-136, 2011.
[26] Daniel Kühlwein and Josef Urban. MaLeS: A framework for automatic tuning of automated theorem
provers. Journal of Automated Reasoning, 55(2):91-116, 2015.
[27] James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine learning for first-order theorem
proving. Journal of Automated Reasoning, 53(2):141-172, 2014.
[28] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In
Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,
pages 701-710. ACM, 2014.
[29] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale
information network embedding. In Proceedings of the 24th International Conference on World Wide Web,
pages 1067-1077. ACM, 2015.
[30] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the
22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855-864.
ACM, 2016.
[31] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations
of words and phrases and their compositionality. In Advances in Neural Information Processing Systems,
pages 3111-3119, 2013.
[32] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. arXiv preprint arXiv:1301.3781, 2013.
[33] Christoph Goller and Andreas Kuchler. Learning task-dependent distributed representations by backpropagation through structure. In Neural Networks, 1996, IEEE International Conference on, volume 1, pages
347-352. IEEE, 1996.
[34] Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from
tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
[35] Dipendra Kumar Misra and Yoav Artzi. Neural shift-reduce CCG semantic parsing. In EMNLP, pages
1775-1786, 2016.
[36] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In
Neural Networks, 2005. IJCNN'05. Proceedings. 2005 IEEE International Joint Conference on, volume 2,
pages 729-734. IEEE, 2005.
[37] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The
graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.
[38] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán
Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints.
In Advances in Neural Information Processing Systems, pages 2224-2232, 2015.
[39] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks.
arXiv preprint arXiv:1511.05493, 2015.
[40] Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-RNN: Deep learning on spatiotemporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 5308-5317, 2016.
[41] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data.
arXiv preprint arXiv:1506.05163, 2015.
[42] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs
with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages
3837-3845, 2016.
[43] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks.
arXiv preprint arXiv:1609.02907, 2016.
[44] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for
graphs. In Proceedings of the 33rd annual international conference on machine learning. ACM, 2016.
[45] Alonzo Church. A formulation of the simple theory of types. The Journal of Symbolic Logic, 5(2):56-68,
1940.
[46] John Harrison. The HOL Light theory of Euclidean space. Journal of Automated Reasoning, pages 1-18,
2013.
[47] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[48] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Lecture 6a: Overview of mini-batch gradient
descent. Coursera lecture slides, https://class.coursera.org/neuralnets-2012-001/lecture, 2012.
[49] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning
Research, 9(Nov):2579-2605, 2008.
Learning Deep Models
Toan Tran1 , Trung Pham1 , Gustavo Carneiro1 , Lyle Palmer2 and Ian Reid1
1
School of Computer Science, 2 School of Public Health
The University of Adelaide, Australia
{toan.m.tran, trung.pham, gustavo.carneiro,
lyle.palmer, ian.reid} @adelaide.edu.au
Abstract
Data augmentation is an essential part of the training process applied to deep
learning models. The motivation is that a robust training process for deep learning
models depends on large annotated datasets, which are expensive to acquire,
store and process. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data
augmentation. The dominant data augmentation approach in the field assumes
that new training samples can be obtained via random geometric or appearance
transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new
training samples. In this paper, we provide a novel Bayesian formulation to data
augmentation, where new annotated training points are treated as missing variables
and generated based on the distribution learned from the training set. For learning,
we introduce a theoretically sound algorithm, generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension
of the Generative Adversarial Network (GAN). Classification results on MNIST,
CIFAR-10 and CIFAR-100 show the better performance of our proposed method
compared to the current dominant data augmentation approach mentioned above ?
the results also show that our approach produces better classification results than
similar GAN models.
1 Introduction
Deep learning has become the "backbone" of several state-of-the-art visual object classification
[19, 14, 25, 27], speech recognition [17, 12, 6], and natural language processing [4, 5, 31] systems.
One of the many reasons that explains the success of deep learning models is that their large capacity
allows for the modeling of complex, high dimensional data patterns. The large capacity allowed by
deep learning is enabled by millions of parameters estimated within annotated training sets, where
generalization tends to improve with the size of these training sets. One way of acquiring large
annotated training sets is via the manual (or "hand") labeling of training samples by human experts:
a difficult and sometimes subjective task that is expensive and prone to mistakes. Another way of
producing such large training sets is to artificially enlarge existing training datasets, a process that
is commonly known in computer science as data augmentation (DA).
In computer vision applications, DA has been predominantly developed with the application of simple
geometric and appearance transformations on existing annotated training samples in order to generate
new training samples, where the transformation parameters are sampled with additive Gaussian or
uniform noise. For instance, for ImageNet classification [8], new training images can be generated by
applying random rotations, translations or color perturbations to the annotated images [19]. Such a
DA process based on "label-preserving" transformations assumes that the noise model over these
transformation spaces can represent with fidelity the processes that have produced the labelled images.
This is a strong assumption that to the best of our knowledge has not been properly tested. In fact,
this commonly used DA process is known as "poor man's" data augmentation (PMDA) [28] in the
statistical learning community because new synthetic samples are generated from a distribution
estimated only once at the beginning of the training process.
Figure 1: An overview of our Bayesian data augmentation algorithm for learning deep models. In
this analytic framework, the generator and classifier networks are jointly learned, and the synthesized
training set is continuously updated as the training progresses.
In the current manuscript, we propose a novel Bayesian DA approach for training deep learning
models. In particular, we treat synthetic data points as instances of a random latent variable, which
are drawn from a distribution learned from the given annotated training set. Effectively, rather than
generating new synthetic training data prior to the training process using pre-defined transformation
spaces and noise models, our approach generates new training data as the training progresses using
samples obtained from an iteratively learned training data distribution. Fig. 1 shows an overview of
our proposed data augmentation algorithm.
The development of our approach is inspired by DA using latent variables proposed by the statistical
learning community [29], where the motivation is to introduce latent variables to facilitate the computation of posterior distributions. However, directly applying this idea to deep learning is challenging
because sampling millions of network parameters is computationally difficult. By replacing the
estimation of the posterior distribution by the estimation of the maximum a posteriori (MAP) probability, one can employ the Expectation Maximization (EM) algorithm, if the maximisation of such
augmented posteriors is feasible. Unfortunately, this is not the case for deep learning models, where
the posterior maximisation cannot reliably produce a global optimum. An additional challenge for
deep learning models is that it is nontrivial to compute the expected value of the network parameters
given the current estimate of the network parameters and the augmented data.
In order to address such challenges, we propose a novel Bayesian DA algorithm, called Generalized
Monte Carlo Expectation Maximization (GMCEM), which jointly augments the training data and
optimises the network parameters. Our algorithm runs iteratively, where at each iteration we sample
new synthetic training points and use Monte Carlo to estimate the expected value of the network
parameters given the previous estimate. Then, the parameter values are updated with stochastic
gradient descent (SGD). We show that the augmented learning loss function is actually equivalent to
the expected value of the network parameters, and that therefore we can guarantee weak convergence.
Moreover, our method depends on the definition of predictive distributions over the latent variables,
but the design of such distributions is hard because they need to be sufficiently expressive to model
high-dimensional data, such as images. We address this challenge by leveraging the recent advances
reached by deep generative models [11], where data distributions are implicitly represented via deep
neural networks whose parameters are learned from annotated data.
We demonstrate our Bayesian DA algorithm in the training of deep learning classification models [15,
16]. Our proposed algorithm is realised by extending a generative adversarial network (GAN)
model [11, 22, 24] with a data generation model and two discriminative models (one to discriminate
between real and fake images and another to discriminate between the dataset classes). One important
contribution of our approach is the fact that the modularity of our method allows us to test different
models for the generative and discriminative models ? in particular, we are able to test several recently
proposed deep learning models [15, 16] for the dataset class classification. Experiments on MNIST,
CIFAR-10 and CIFAR-100 datasets show the better classification performance of our proposed
method compared to the current dominant DA approach.
2 Related Work

2.1 Data Augmentation
Data augmentation (DA) has become an essential step in training deep learning models, where
the goal is to enlarge the training sets to avoid over-fitting. DA has also been explored by the
statistical learning community [29, 7] for calculating posterior distributions via the introduction of
latent variables. Such DA techniques are useful in cases where the likelihood (or posterior) density
functions are hard to maximize or sample, but the augmented density functions are easier to work with.
An important caveat is that in statistical learning, latent variables may not lie in the same space as the
observed data, but in deep learning, the latent variables representing the synthesized training samples
belong to the same space as the observed data.
Synthesizing new training samples from the original training samples is a widely used DA method
for training deep learning models [30, 26, 19]. The usual idea is to apply either additive Gaussian or
uniform noise over pre-determined families of transformations to generate new synthetic training
samples from the original annotated training samples. For example, Yaeger et al. [30] proposed the
?stroke warping" technique for word recognition, which adds small changes in skew, rotation, and
scaling into the original word images. Simard et al. [26] used a related approach for visual document
analysis. Similarly, Krizhevsky et al. [19] used horizontal reflections and color perturbations for
image classification. Hauberg et al. [13] proposed a manifold learning approach that is run once
before the classifier training begins, where this manifold describes the geometric transformations
present in the training set.
Nevertheless, the DA approaches presented above have several limitations. First, it is unclear how
to generate diverse data samples. As pointed out by Fawzi et al. [10], the transformations should
be ?sufficiently small? so that the ground truth labels are preserved. In other words, these methods
implicitly assume a small scale noise model over a pre-determined ?transformation space" of the
training samples. Such an assumption is likely too restrictive and has not been tested properly.
Moreover, these DA mechanisms do not adapt with the progress of the learning process; instead, the
augmented data are generated only once and prior to the training process. This is, in fact, analogous to
the Poor Man?s Data Augmentation (PMDA) [28] algorithm in statistical learning as it is non-iterative.
In contrast, our Bayesian DA algorithm iteratively generates novel training samples as the training
progresses, and the "generator" is adaptively learned. This is crucial because we do not make a noise
model assumption over pre-determined transformation spaces to generate new synthetic training
samples.
2.2 Deep Generative Models
Deep learning has been widely applied in training discriminative models with great success, but
the progress in learning generative models has proven to be more difficult. One noteworthy work
in training deep generative models is the Generative Adversarial Networks (GAN) proposed by
Goodfellow et al. [11], which, once trained, can be used to sample synthetic images. GAN consists
of one generator and one discriminator, both represented by deep learning models. In "adversarial
training", the generator and discriminator play a "two-player minimax game", in which the generator
tries to fool the discriminator by rendering images as similar as possible to the real images, and the
discriminator tries to distinguish the real and fake ones. Nonetheless, the synthetic images generated
by GAN are of low quality when trained on the datasets with high variability [9]. Variants of GAN
have been proposed to improve the quality of the synthetic images [22, 3, 23, 24]. For instance,
conditional GAN [22] improves the original GAN by making the generator conditioned on the class
labels. Auxiliary classifier GAN (AC-GAN) [24] additionally forces the discriminator to classify both
real-or-fake sources as well as the class labels of the input samples. These two works have shown
significant improvement over the original GAN in generating photo-realistic images. So far these
generative models mainly aim at generating samples of high-quality, high-resolution photo-realistic
images. In contrast, we explore generative models (in the form of GANs) in our proposed Bayesian
DA algorithm for improving classification models.
3 Data Augmentation Algorithm in Deep Learning

3.1 Bayesian Neural Networks
Our goal is to estimate the parameters of a deep learning model using an annotated training set
denoted by Y = {y_n}_{n=1}^N, where y = (t, x), with annotations t ∈ {1, ..., K} (K = # classes), and
data samples represented by x ∈ R^D. Denoting the model parameters by θ, the training process is
defined by the following optimisation problem:
$$\theta^* = \arg\max_{\theta} \log p(\theta|y), \quad (1)$$

where the observed posterior p(θ|y) = p(θ|t, x) ∝ p(t|x, θ) p(x|θ) p(θ).
Assuming that the data samples in Y are conditionally independent, the cost function that maximises
(1) is defined as [1]:
$$\log p(\theta|y) \propto \log p(\theta) + \frac{1}{N}\sum_{n=1}^{N}\big(\log p(t_n|x_n,\theta) + \log p(x_n|\theta)\big), \quad (2)$$

where p(θ) denotes a prior on the distribution of the deep learning model parameters, p(t_n|x_n, θ)
represents the conditional likelihood of label t_n, and p(x_n|θ) is the likelihood of the data x_n.
In general, the training process to estimate the model parameters θ tends to over-fit the training set Y
given the large dimensionality of θ and the fact that Y does not have a sufficiently large amount of
training samples. One of the main approaches designed to circumvent this over-fitting issue is the
automated generation of synthetic training samples, a process known as data augmentation (DA).
In this work, we propose a novel Bayesian approach to augment the training set, targeting a more
robust training process.
3.2 Data Augmentation using Latent Variable Methods
The DA principle is to increase the observed training data y using a latent variable z that represents
the synthesised data, so that the augmented posterior p(θ|y, z) can be easily estimated [28], leading
to a more robust estimation of p(θ|y). The latent variable is defined by z = (t_a, x_a), where x_a ∈ R^D
refers to a synthesized data point, and t_a ∈ {1, ..., K} denotes the associated label.
The most commonly chosen optimization method in these types of training processes involving
a latent variable is the expectation-maximisation (EM) algorithm [7]. In EM, let θ^i denote the
estimated parameters of the model of p(θ|y) at iteration i, and p(z|θ^i, y) represents the conditional
predictive distribution of z. Then, the E-step computes the expectation of log p(θ|y, z) with respect
to p(z|θ^i, y), as follows:

$$Q(\theta, \theta^i) = \mathbb{E}_{p(z|\theta^i,y)}\big[\log p(\theta|y,z)\big] = \int_z \log p(\theta|y,z)\,p(z|\theta^i,y)\,dz. \quad (3)$$
The parameter estimate at the next iteration, θ^{i+1}, is then obtained at the M-step by maximizing
the Q function:

$$\theta^{i+1} = \arg\max_{\theta} Q(\theta, \theta^i). \quad (4)$$
The algorithm iterates until ||θ^{i+1} - θ^i|| is sufficiently small, and the optimal θ* is selected from the
last iteration. The EM algorithm guarantees that the sequence {θ^i}_{i=1,2,...} converges to a stationary
point of p(θ|y) [7, 28], given that the expectation in (3) and the maximization in (4) can be computed
exactly. In the convergence proof [7, 28], it is assumed that θ^i converges to θ* as the number of
iterations i increases, and then the proof consists of showing that θ* is a critical point of p(θ|y).
However, in practice, either the E-step or the M-step or both can be difficult to compute exactly, especially
when working with deep learning models. In such cases, we need to rely on approximation methods.
For instance, the Monte Carlo sampling method can approximate the integration in (3) (the E-step).
This technique is known as the Monte Carlo EM (MCEM) algorithm [28]. Furthermore, when the
estimation of the global maximiser of Q(θ, θ^i) in (4) is difficult, Dempster et al. [7] proposed the
Generalized EM (GEM) algorithm, which relaxes this requirement with the estimation of θ^{i+1}, where
Q(θ^{i+1}, θ^i) > Q(θ^i, θ^i). The GEM algorithm is proven to have weak convergence [28], by showing
that p(θ^{i+1}|y) > p(θ^i|y), given that Q(θ^{i+1}, θ^i) > Q(θ^i, θ^i).
3.3 Generalized Monte Carlo EM Algorithm
With the latent variable z, the augmented posterior p(θ|y, z) becomes:

$$p(\theta|y,z) = \frac{p(y,z,\theta)}{p(y,z)} = \frac{p(z|y,\theta)\,p(\theta|y)\,p(y)}{p(z|y)\,p(y)} = \frac{p(z|y,\theta)\,p(\theta|y)}{p(z|y)}, \quad (5)$$
where the E-step is represented by the following Monte Carlo estimation of Q(θ, θ^i):

$$\hat{Q}(\theta, \theta^i) = \frac{1}{M}\sum_{m=1}^{M}\log p(\theta|y,z_m) = \log p(\theta|y) + \frac{1}{M}\sum_{m=1}^{M}\big(\log p(z_m|y,\theta) - \log p(z_m|y)\big), \quad (6)$$

where z_m ∼ p(z|y, θ^i), for m ∈ {1, ..., M}. In (6), if the label t_a^m of the m-th synthesized sample
z_m is known, then x_a^m can be sampled from the distribution p(x_a^m|θ, y, t_a^m). Hence, the conditional
distribution p(z|y, θ) can be decomposed as:

$$p(z|y,\theta) = p(t_a, x_a|y,\theta) = p(t_a|x_a, y, \theta)\,p(x_a|y,\theta), \quad (7)$$

where (t_a, x_a) are conditionally independent of y given that all the information from the training set
y is summarized in θ; this means that p(t_a|x_a, y, θ) = p(t_a|x_a, θ), and p(x_a|y, θ) = p(x_a|θ).
The maximization of Q̂(θ, θ^i) with respect to θ for the M-step is re-formulated by first removing all terms that are independent of θ, which allows us to reach the following derivation (making the same assumption as in (2)):

Q̂(θ, θ^i) = log p(θ) + (1/N) Σ_{n=1}^{N} (log p(t_n|x_n, θ) + log p(x_n|θ)) + (1/M) Σ_{m=1}^{M} log p(z_m|y, θ)
          = log p(θ) + (1/N) Σ_{n=1}^{N} (log p(t_n|x_n, θ) + log p(x_n|θ)) + (1/M) Σ_{m=1}^{M} (log p(t_a^m|x_a^m, θ) + log p(x_a^m|θ)).   (8)
Given that there is no analytical solution for the optimization in (8), we follow the same strategy employed in the GEM algorithm, where we estimate θ^{i+1} so that Q̂(θ^{i+1}, θ^i) > Q̂(θ^i, θ^i). As the function Q̂(θ, θ^i) is differentiable, we can find such a θ^{i+1} by running one step of gradient descent. It can be seen that our proposed optimization consists of a marriage between the MCEM and GEM algorithms, which we name: Generalized Monte Carlo EM (GMCEM). The weak convergence proof of GMCEM is provided by Lemma 1.

Lemma 1. Assuming that Q̂(θ^{i+1}, θ^i) > Q̂(θ^i, θ^i), which is guaranteed from (8), then the weak convergence (i.e. p(θ^{i+1}|y) > p(θ^i|y)) will be fulfilled.

Proof. Given Q̂(θ^{i+1}, θ^i) > Q̂(θ^i, θ^i), then by taking the expectation on both sides, that is E_{p(z|y,θ^i)}[Q̂(θ^{i+1}, θ^i)] > E_{p(z|y,θ^i)}[Q̂(θ^i, θ^i)], we obtain Q(θ^{i+1}, θ^i) > Q(θ^i, θ^i), which is the condition for p(θ^{i+1}|y) > p(θ^i|y) proven in [28].
So far, we have presented our Bayesian DA algorithm in a very general manner. The specific forms that the probability terms in (8) take in our implementation are presented in the next section.

4 Implementation

In general, our proposed DA algorithm can be implemented using any deep generative and classification models which have differentiable optimisation functions. This is in fact an important advantage that allows us to use the most sophisticated extant models available in the field for the implementation of our algorithm. In this section, we present a specific implementation of our approach using state-of-the-art discriminative and generative models.
4.1 Network Architecture
Our network architecture consists of two models: a classifier and a generator. For the classifier, modern deep convolutional neural networks [15, 16] can be used. For the generator, we select generative adversarial networks (GAN) [11], which include a generative model (represented by a deconvolutional neural network) and an authenticator model (represented by a convolutional neural network). This authenticator component is mainly used for facilitating the adversarial training. As a result, our network consists of a classifier (C) with parameters θ_C, a generator (G) with parameters θ_G and an authenticator (A) with parameters θ_A. Fig. 2 compares our network architecture with other variants of GAN recently proposed [11, 22, 24]. On the surface, our network appears similar to AC-GAN [24], where the only difference is the separation of the classifier network from the authenticator network. However, this crucial modularisation enables our DA algorithm to replace GANs by other generative models that may become available in the future; likewise, we can use the most sophisticated classification models for C. Furthermore, unlike our model, the classification subnetwork introduced in AC-GAN mainly aims at improving the quality of synthesized samples, rather than at classification tasks. Nonetheless, one can consider AC-GAN as one possible implementation of our DA algorithm. Finally, our proposed GAN model is similar to the recently proposed triplet GAN [21]^1, but it is important to emphasise that triplet GAN was proposed in order to improve the training procedure for GANs, while our model represents a particular realisation of the proposed Bayesian DA algorithm, which is the main contribution of this paper.
[Figure 2: A comparison of different network architectures including GAN [11], C-GAN [22], AC-GAN [24] and ours. G: Generator, A: Authenticator, C: Classifier, D: Discriminator.]
4.2 Optimization Function
Let us define x ∈ R^D, θ_C ∈ R^C, θ_A ∈ R^A, θ_G ∈ R^G, u ∈ R^100, and c ∈ {1, ..., K}; the classifier C, the authenticator A and the generator G are respectively defined by

f_C : R^D × R^C → [0, 1]^K;   (9)
f_A : R^D × R^A → [0, 1]^2;   (10)
f_G : R^100 × Z_+ × R^G → R^D.   (11)

The optimisation function used to train the classifier C is defined as:

J_C(θ_C) = (1/N) Σ_{n=1}^{N} l_C(t_n|x_n, θ_C) + (1/M) Σ_{m=1}^{M} l_C(t_a^m|x_a^m, θ_C),   (12)

where l_C(t_n|x_n, θ_C) = −log(softmax(f_C(t_n = c; x_n, θ_C))).
The optimisation functions for the authenticator and generator networks are defined by [11]:

J_AG(θ_A, θ_G) = (1/N) Σ_{n=1}^{N} l_A(x_n|θ_A) + (1/M) Σ_{m=1}^{M} l_AG(x_a^m|θ_A, θ_G),   (13)

where

l_A(x_n|θ_A) = −log(softmax(f_A(input = real; x_n, θ_A)));   (14)
l_AG(x_a^m|θ_A, θ_G) = −log(1 − softmax(f_A(input = real; x_a^m, θ_G, θ_A))).   (15)

1 The triplet GAN [21] was proposed in parallel to this NIPS submission.
Following the same training procedure used to train GANs [11, 24], the optimisation is divided into two steps: the training of the discriminative part, consisting of minimising J_C(θ_C) + J_AG(θ_A, θ_G), and the training of the generative part, consisting of minimising J_C(θ_C) − J_AG(θ_A, θ_G). This loss function can be linked to (8), as follows:

l_C(t_n|x_n, θ_C) = −log p(t_n|x_n, θ),   (16)
l_C(t_a^m|x_a^m, θ_C) = −log p(t_a^m|x_a^m, θ),   (17)
l_A(x_n|θ_A) = −log p(x_n|θ),   (18)
l_AG(x_a^m|θ_A, θ_G) = −log p(x_a^m|θ).   (19)
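To make the correspondence concrete, a minimal numpy sketch of the losses in (12)-(15) is given below; the array shapes and the convention that `p_real_*` is the "real" softmax output of f_A are our own assumptions, chosen to match the signatures (9)-(11) rather than taken from the released implementation.

```python
import numpy as np

def classifier_loss(probs_real, t_real, probs_synth, t_synth):
    """J_C in (12): mean cross-entropy over real pairs (x_n, t_n) plus
    mean cross-entropy over synthesized pairs (x_a^m, t_a^m).
    probs_* are softmax outputs of f_C with shape (batch, K)."""
    eps = 1e-8
    l_real = -np.log(probs_real[np.arange(len(t_real)), t_real] + eps).mean()
    l_synth = -np.log(probs_synth[np.arange(len(t_synth)), t_synth] + eps).mean()
    return l_real + l_synth

def authenticator_generator_loss(p_real_on_real, p_real_on_synth):
    """J_AG in (13): l_A in (14) on real images plus l_AG in (15) on
    synthesized images; p_real_* is f_A's softmax probability of 'real'."""
    eps = 1e-8
    l_a = -np.log(p_real_on_real + eps).mean()           # (14)
    l_ag = -np.log(1.0 - p_real_on_synth + eps).mean()   # (15)
    return l_a + l_ag
```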
4.3 Training

Training the network parameters θ follows the proposed GMCEM algorithm presented in Sec. 3. Accordingly, at each iteration we need to find θ^{i+1} so that Q̂(θ^{i+1}, θ^i) > Q̂(θ^i, θ^i), which can be achieved using gradient descent. However, since the number of training and augmented samples (i.e., N + M) is large, evaluating the sum of the gradients over this whole set is computationally expensive. A similar issue was observed in contrastive divergence [2], where the computation of the approximate gradient required in theory an infinite number of Markov chain Monte Carlo (MCMC) cycles, but in practice, it was noted that only a few cycles were needed to provide a robust gradient approximation. Analogously, following the same principle, we propose to replace gradient descent by stochastic gradient descent (SGD), where the update from θ^i to θ^{i+1} is estimated using only a subset of the M + N training samples. In practice, we divide the training set into batches, and the updated θ^{i+1} is obtained by running SGD through all batches (i.e., one epoch). We found that such a strategy works well empirically, as shown in the experiments (Sec. 5).
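Putting the pieces together, one GMCEM iteration (θ^i → θ^{i+1}) realized as an SGD epoch can be sketched as below; the three callables are placeholders for the actual sampling and update steps, so this is an outline of the procedure rather than a drop-in implementation.

```python
def gmcem_epoch(batches, sample_synthetic, step_discriminative, step_generative):
    """One GMCEM iteration realized as one SGD epoch over all batches.

    sample_synthetic(n)      -- Monte Carlo E-step: draw n pairs (t_a, x_a)
                                from the current generator (see (6)-(7))
    step_discriminative(...) -- one SGD step minimizing J_C + J_AG
                                (classifier C and authenticator A)
    step_generative(...)     -- one SGD step minimizing J_C - J_AG
                                with respect to the generator G
    """
    for x_real, t_real in batches:
        t_synth, x_synth = sample_synthetic(len(x_real))
        step_discriminative(x_real, t_real, x_synth, t_synth)
        step_generative(x_synth, t_synth)
```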
5 Experiments

In this section, we compare our proposed Bayesian DA algorithm with the commonly used DA technique [19] (denoted as PMDA) on several image classification tasks (code available at: https://github.com/toantm/keras-bda). This comparison is based on experiments using the following three datasets: MNIST [20] (containing 60,000 training and 10,000 testing images of 10 handwritten digits), CIFAR-10 [18] (consisting of 50,000 training and 10,000 testing images of 10 visual classes like car, dog, cat, etc.), and CIFAR-100 [18] (containing the same amount of training and testing samples as CIFAR-10, but with 100 visual classes).
The experimental results are based on the top-1 classification accuracy as a function of the amount of data augmentation used; in particular, we try the following amounts of synthesized images M: a) M = N (i.e., 2× DA), b) M = 4N (5× DA), and c) M = 9N (10× DA). The PMDA is based on the use of a uniform noise model over a rotation range of [−10, 10] degrees, and a translation range of at most 10% of the image width and height. Other transformations were tested, but these two provided the best results for PMDA on the datasets considered in this paper. We also include an experiment that does not use DA in order to illustrate the importance of DA in deep learning.
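For reference, the PMDA policy described above can be reproduced with standard image-processing routines; the scipy-based sketch below is our own paraphrase of the uniform rotation/translation model, not the script used in the experiments.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def pmda_augment(image, rng=np.random):
    """One PMDA sample: uniform rotation in [-10, 10] degrees and a
    translation of at most 10% of the image height and width."""
    angle = rng.uniform(-10.0, 10.0)
    h, w = image.shape[:2]
    dy = rng.uniform(-0.1, 0.1) * h
    dx = rng.uniform(-0.1, 0.1) * w
    rotated = rotate(image, angle, reshape=False, mode='nearest')
    offsets = (dy, dx) + (0,) * (image.ndim - 2)  # leave any channel axis fixed
    return shift(rotated, offsets, mode='nearest')
```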
As mentioned in Sec. 1, one important contribution of our method is its ability to use arbitrary deep learning generative and classification models. For the generative model, we use the C-GAN [22]^2, and for the classification model we rely on ResNet18 [15] and ResNetpa [16]. The architectures of the generator and authenticator networks, which are kept unchanged for all three datasets, can be found in the supplementary material. For training, we use Adadelta (with learning rate = 1.0, decay rate = 0.95 and epsilon = 1e−8) for the classifier (C), Adam (with learning rate 0.0002, and exponential decay rate 0.5) for the generator (G) and SGD (with learning rate 0.01) for the authenticator (A). The noise vector used by the generator G is based on standard Gaussian noise. In all experiments, we use training batches of size 100.
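In Keras-style code, these optimiser settings would read roughly as follows; the mapping of "decay rate" to Adadelta's rho and of "exponential decay rate" to Adam's beta_1 is our interpretation of the text, so treat this as a configuration sketch.

```python
from keras.optimizers import Adadelta, Adam, SGD

opt_C = Adadelta(lr=1.0, rho=0.95, epsilon=1e-8)  # classifier C
opt_G = Adam(lr=0.0002, beta_1=0.5)               # generator G
opt_A = SGD(lr=0.01)                              # authenticator A
batch_size = 100
```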
Comparison results using the ResNet18 and ResNetpa networks are shown in Figures 3 and 4. First, in all cases it is clear that DA provides a significant improvement in the classification accuracy; in general, larger augmented training set sizes lead to more accurate classification. More importantly, the results reveal that our Bayesian DA algorithm outperforms PMDA by a large margin on all datasets. Given the similarity between the model used by our proposed Bayesian DA algorithm (using ResNetpa [16]) and AC-GAN, it is relevant to present a comparison between these two models, which is shown in Fig. 5; notice that our approach is far superior to AC-GAN. Finally, it is also important to show the evolution of the test classification accuracy as a function of training time; this is reported in Fig. 6. As expected, it is clear that PMDA produces better classification results in the first training stages, but after a certain amount of training, our Bayesian DA algorithm produces better results. In particular, using the ResNet18 [15] classifier on CIFAR-100, our method is better than PMDA after two hours of training, while for MNIST our method is better after five hours of training.

It is worth emphasizing that the main goal of the proposed Bayesian DA is to improve the training process of the classifier C. Nevertheless, it is also of interest to investigate the quality of the images produced by the generator G. In Fig. 7, we display several examples of the synthetic images produced by G after the training process has converged. In general, the images look reasonably realistic, particularly the handwritten digits, where the synthesized images would be hard to generate by the application of Gaussian or uniform noise on pre-determined geometric and appearance transformations.

2 The code was adapted from: https://github.com/lukedeo/keras-acgan

[Figure 3: Performance comparison using the ResNet18 [15] classifier on (a) MNIST, (b) CIFAR-10 and (c) CIFAR-100: accuracy rate vs. increase in training data size (2X, 5X, 10X) for Without DA, PMDA, and Ours.]

[Figure 4: Performance comparison using the ResNetpa [16] classifier on (a) MNIST, (b) CIFAR-10 and (c) CIFAR-100: accuracy rate vs. increase in training data size (2X, 5X, 10X) for Without DA, PMDA, and Ours.]

[Figure 5: Performance comparison with AC-GAN using ResNetpa [16] on (a) MNIST, (b) CIFAR-10 and (c) CIFAR-100: accuracy rate vs. increase in training data size (2X, 5X, 10X) for AC-GAN, ResNetpa without DA, and ResNetpa with ours.]

[Figure 6: Classification accuracy as a function of training time (0.1hr to 24hrs) using PMDA and our proposed data augmentation on ResNet18 [15], for (a) MNIST and (b) CIFAR-100.]

[Figure 7: Synthesized images generated using our model trained on MNIST (a), CIFAR-10 (b) and CIFAR-100 (c). Each column is conditioned on a class label: a) classes are 0, ..., 9; b) classes are airplane, automobile, bird and ship; and c) classes are apple, aquarium fish, rose and lobster.]
6 Conclusions

In this paper we have presented a novel Bayesian DA algorithm that improves the training process of deep learning classification models. Unlike currently dominant methods that apply random transformations to the observed training samples, our method is theoretically sound; the missing data are sampled from the distribution learned from the annotated training set. However, we do not train the generator distribution independently from the training of the classification model. Instead, both models are jointly optimised based on our proposed Bayesian DA formulation that connects the classical latent variable method in statistical learning with modern deep generative models. The advantages of our data augmentation approach are validated using several image classification tasks with clear improvements over standard DA methods and also over the recently proposed AC-GAN model [24].

Acknowledgments

TT gratefully acknowledges the support of Vietnam International Education Development (VIED). TP, GC and IR gratefully acknowledge the support of the Australian Research Council through the Centre of Excellence for Robotic Vision (project number CE140100016) and Laureate Fellowship FL130100102 to IR.
References

[1] C. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics), 1st edn. 2006, corr. 2nd printing edn. Springer, New York, 2007.
[2] M. A. Carreira-Perpinan and G. E. Hinton. On contrastive divergence learning. In AISTATS, volume 10, pages 33–40. Citeseer, 2005.
[3] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, 2016.
[4] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM, 2008.
[5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537, 2011.
[6] X. Cui, V. Goel, and B. Kingsbury. Data augmentation for deep neural network acoustic modeling. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(9):1469–1477, 2015.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1–38, 1977.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[9] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, pages 1486–1494. 2015.
[10] A. Fawzi, H. Samulowitz, D. Turaga, and P. Frossard. Adaptive data augmentation for image classification. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 3688–3692. IEEE, 2016.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[12] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
[13] S. Hauberg, O. Freifeld, A. B. L. Larsen, J. Fisher, and L. Hansen. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In Artificial Intelligence and Statistics, pages 342–350, 2016.
[14] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904–1916, 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
[17] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
[18] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[21] C. Li, K. Xu, J. Zhu, and B. Zhang. Triple generative adversarial nets. CoRR, abs/1703.02291, 2017.
[22] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[23] A. Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
[24] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[25] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[26] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition - Volume 2, 2003.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[28] M. A. Tanner. Tools for statistical inference: Observed data and data augmentation methods. Lecture Notes in Statistics, 67, 1991.
[29] M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398):528–540, 1987.
[30] L. Yaeger, R. Lyon, and B. Webb. Effective training of a neural network character classifier for word recognition. In NIPS, volume 9, pages 807–813, 1996.
[31] X. Zhang and Y. LeCun. Text understanding from scratch. arXiv preprint arXiv:1502.01710, 2015.
Principles of Riemannian Geometry
in Neural Networks
Michael Hauser
Department of Mechanical Engineering
Pennsylvania State University
State College, PA 16801
[email protected]
Asok Ray
Department of Mechanical Engineering
Pennsylvania State University
State College, PA 16801
[email protected]
Abstract
This study deals with neural networks in the sense of geometric transformations
acting on the coordinate representation of the underlying data manifold which
the data is sampled from. It forms part of an attempt to construct a formalized
general theory of neural networks in the setting of Riemannian geometry. From
this perspective, the following theoretical results are developed and proven for
feedforward networks. First it is shown that residual neural networks are finite difference approximations to dynamical systems of first order differential
equations, as opposed to ordinary networks that are static. This implies that the
network is learning systems of differential equations governing the coordinate
transformations that represent the data. Second it is shown that a closed form
solution of the metric tensor on the underlying data manifold can be found by
backpropagating the coordinate representations learned by the neural network
itself. This is formulated in a formal abstract sense as a sequence of Lie group
actions on the metric fibre space in the principal and associated bundles on the
data manifold. Toy experiments were run to confirm parts of the proposed theory,
as well as to provide intuitions as to how neural networks operate on data.
1 Introduction
The introduction is divided into two parts. Section 1.1 attempts to succinctly describe ways in
which neural networks are usually understood to operate. Section 1.2 articulates a more minority
perspective. It is this minority perspective that this study develops, showing that there exists a rich
connection between neural networks and Riemannian geometry.
1.1 Latent variable perspectives
Neural networks are usually understood from a latent variable perspective, in the sense that
successive layers are learning successive representations of the data. For example, convolution
networks [10] are understood quite well as learning hierarchical representations of images [19].
Long short-term memory networks [9] are designed such that input data act on a memory cell to
avoid problems with long term dependencies. More complex devices like neural Turing machines
are designed with similar intuitions for reading and writing to a memory [6].
Residual networks were designed [7] with the intuition that it is easier to learn perturbations from
the identity map than it is to learn an unreferenced map. Further experiments then suggest that
residual networks work well because, during forward propagation and back propagation, the signal
from any block can be mapped to any other block [8]. After unraveling the residual network, this
attribute can be seen more clearly. From this perspective, the residual network can be understood
as an ensemble of shallower networks [16].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.2 Geometric perspectives
These latent variable perspectives are a powerful tool for understanding and designing neural
networks. However, they often overlook the fundamental process taking place, where successive
layers successively warp the coordinate representation of the data manifold with nonlinear transformations into a form where the classes in the data manifold are linearly separable by hyperplanes.
These nested compositions of affine transformations followed by nonlinear activations can be seen
by work done by C. Olah (http://colah.github.io/) and published by LeCun et al. [11].
Research in language modeling has shown that the word embeddings learned by the network preserve vector offsets [12], with an example given as x_apples − x_apple ≈ x_cars − x_car for the word embedding vectors x_i. This suggests the network is learning a word embedding space with some resemblance to group closure, with group operation vector addition. Note that closure is generally not a property of data, for if instead of word embeddings one had images of apples and cars, preservation of these vector offsets would certainly not hold at the input [3]. This is because the input images are represented in Cartesian coordinates, but are not sampled from a flat data manifold, and so one should not measure vector offsets by Euclidean distance. In Locally Linear Embedding [13], a coordinate system is learned in which Euclidean distance can be used. This work shows that neural networks are also learning a coordinate system in which the data manifold can be measured by Euclidean distance, and that the coordinate representation of the metric tensor can be backpropagated through to the input so that distance can be measured in the input coordinates.
2 Mathematical notations

Einstein notation is used throughout this paper. A raised index in parenthesis, such as x^(l), means it is the lth coordinate system, while φ^(l) means it is the lth coordinate transformation. If the index is not in parenthesis, a superscript (contravariant) free index means it is a vector, a subscript (covariant) free index means it is a covector, and a repeated index means implied summation. The dots in tensors, such as A^{a·}_{·b}, are placeholders to keep track of which index comes first, second, etc.

A (topological) manifold M of dimension dim M is a Hausdorff, paracompact topological space that is locally homeomorphic to R^{dim M} [17]. This homeomorphism x : U → x(U) ⊂ R^{dim M} is called a coordinate system on U ⊂ M. Non-Euclidean manifolds, such as S^1, can be created by taking an image and rotating it in a circle. A feedforward network learns coordinate transformations φ^(l) : x^(l)(M) → x^(l+1)(M), where the new coordinates are x^(l+1) := φ^(l) ∘ x^(l) : M → x^(l+1)(M), and the network is initialized in Cartesian coordinates x^(0) : M → x^(0)(M) ⊂ R^{dim M}. A data point q ∈ M can only be represented as numbers with respect to some coordinate system; with the coordinates at layer l + 1, q is represented as the layerwise composition x^(l+1)(q) := φ^(l) ∘ ... ∘ φ^(1) ∘ φ^(0) ∘ x^(0)(q).

For an activation function f, such as ReLU or tanh, a standard feedforward network transforms coordinates as x^(l+1) := φ^(l)(x^(l)) := f(x^(l); l), whereas a residual network transforms coordinates as x^(l+1) := φ^(l)(x^(l)) := x^(l) + f(x^(l); l). Note that these are global coordinates over the entire manifold. With the softmax coordinate transformation defined as softmax^j(x^(L)) := e^{W^{(L)j} x^(L)} / Σ_{k=1}^{K} e^{W^{(L)k} x^(L)}, the probability of q ∈ M being from class j is P(Y = j | X = q) = softmax^j(x^(L)(q)). A figure for this section is in the appendix of the full version of the paper.
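As a concrete reading of this notation, the following minimal numpy sketch composes the layerwise coordinate maps and applies the softmax readout; the randomly drawn weights stand in for trained ones, so it only illustrates the composition x^(L)(q) = φ^(L−1) ∘ ... ∘ φ^(0)(x^(0)(q)).

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
L, dim, K = 5, 2, 3
Ws = [rng.normal(size=(dim, dim)) for _ in range(L)]  # one map phi^(l) per layer
bs = [rng.normal(size=dim) for _ in range(L)]
W_out = rng.normal(size=(K, dim))                     # softmax readout weights W^(L)

x = np.array([0.3, -1.2])       # x^(0)(q): Cartesian input coordinates of q
for W, b in zip(Ws, bs):
    x = np.tanh(W @ x + b)      # x^(l+1) = phi^(l)(x^(l)) = f(x^(l); l)
p = softmax(W_out @ x)          # p[j] = P(Y = j | X = q)
```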
3 Neural networks as C^k differentiable coordinate transformations

One can define entire classes of coordinate transformations. The following formulation also has the form of differentiable curves/trajectories, but because the number of dimensions often changes as one moves through the network, it is difficult to interpret a trajectory traveling through a space of changing dimensions. A standard feedforward neural network is a C^0 function:

x^(l+1) := f(x^(l); l)   (1)

A residual network has the form x^(l+1) = x^(l) + f(x^(l); l). However, because of eventually taking the limit as L → ∞ with l ∈ [0, 1] ⊂ R, as opposed to l being only a finitely countable index, the equivalent form of the residual network is as follows:

x^(l+1) ≈ x^(l) + f(x^(l); l) Δl   (2)
where Δl = 1/L for a uniform partition of the interval [0, 1] and is implicit in the weight matrix. One can define entire classes of coordinate transformations inspired by finite difference approximations of differential equations. These can be used to impose kth order differentiable smoothness:

Δx^(l) := x^(l+1) − x^(l) ≈ f(x^(l); l) Δl   (3)

Δ²x^(l) := x^(l+1) − 2x^(l) + x^(l−1) ≈ f(x^(l); l) Δl²   (4)

Each of these defines a differential equation, but of different order smoothness on the coordinate transformations. Written in this form, the residual network in Equation 3 is a first-order forward difference approximation to a C^1 coordinate transformation and has O(Δl) error. Network architectures with higher order accuracies can be constructed, such as central differencing approximations of a C^1 coordinate transformation to give O(Δl²) error.

Note that the architecture of a standard feedforward neural network is a static equation, while the others are dynamic. Also note that Equation 4 can be rewritten as x^(l+1) = x^(l) + f(x^(l); l) Δl² + Δx^(l−1), where Δx^(l−1) = x^(l) − x^(l−1); in this form one sees that this is a residual network with an extra term Δx^(l−1) acting as a sort of momentum term on the coordinate transformations. This momentum term is explored in Section 7.1.
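In code, the layer rules (1), (3) and (4) differ only in their update; the numpy sketch below makes the distinction explicit, with a generic tanh layer for f and Δl = 1/L, as our own illustration of the three smoothness classes.

```python
import numpy as np

def f(x, W, b):
    return np.tanh(W @ x + b)      # generic layer map f(x^(l); l)

def step_c0(x, W, b, dl):
    return f(x, W, b)              # (1): x^(l+1) = f(x^(l); l), static

def step_c1(x, W, b, dl):
    return x + f(x, W, b) * dl     # (3): residual / forward difference, C^1

def step_c2(x, x_prev, W, b, dl):
    # (4): x^(l+1) = x^(l) + f(x^(l); l) dl^2 + (x^(l) - x^(l-1)),
    # a residual step plus a momentum term from the previous layer
    return x + f(x, W, b) * dl**2 + (x - x_prev)
```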
By the definitions of the C^k networks given by Equations 3-4, the right hand side is both continuous and independent of Δl (after dividing), and so the limit exists as Δl → 0. Convergence rates and error bounds of finite difference approximations can be applied to these equations. By the standard definition of the derivative, the residual network defines a system of differentiable transformations:

dx^(l)/dl := lim_{Δl→0} (x^(l+Δl) − x^(l)) / Δl = f(x^(l); l)   (5)

d²x^(l)/dl² := lim_{Δl→0} (x^(l+Δl) − 2x^(l) + x^(l−Δl)) / Δl² = f(x^(l); l)   (6)
Notations are slightly changed, by taking l = nΔl for n ∈ {0, 1, 2, ..., L − 1} and indexing the layers by the fractional index l instead of the integer index n. This defines a partitioning:

P = {0 = l(0) < l(1) < l(2) < ... < l(n) < ... < l(L) = 1}   (7)

where Δl(n) := l(n + 1) − l(n) can in general vary with n, as max_n Δl(n) still goes to zero as L → ∞. To reduce notational complications, this paper will write Δl := Δl(n) for all n ∈ {0, 1, 2, ..., L − 1}.

In [4], a deep residual convolution network was trained on ImageNet in the usual fashion except that parameter weights between residual blocks at the same dimension were shared, at a cost to the accuracy of only 0.2%. This is the difference between learning an inhomogeneous first order equation dx^(l)/dl := f(x^(l); l) and a (piecewise) homogeneous first order equation dx^(l)/dl := f(x^(l)).
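The two cases can be mimicked by whether each residual step receives its own weights or one shared pair, as in this toy numpy sketch (our illustration of the weight-sharing idea in [4], not that paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
L, dim = 20, 8
dl = 1.0 / L
x0 = rng.normal(size=dim)

# inhomogeneous dx/dl = f(x; l): a separate (W_l, b_l) for every layer
params = [(rng.normal(size=(dim, dim)), rng.normal(size=dim)) for _ in range(L)]
x = x0.copy()
for W, b in params:
    x = x + np.tanh(W @ x + b) * dl

# (piecewise) homogeneous dx/dl = f(x): one shared (W, b) for all layers
W, b = rng.normal(size=(dim, dim)), rng.normal(size=dim)
y = x0.copy()
for _ in range(L):
    y = y + np.tanh(W @ y + b) * dl
```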
4 The Riemannian metric tensor learned by neural networks

From the perspective of differential geometry, as one moves through the layers of the neural network, the data manifold stays the same but the coordinate representation of the data manifold changes with each successive affine transformation and nonlinear activation. The objective of the neural network is to find a coordinate representation of the data manifold such that the classes are linearly separable by hyperplanes.

Definition 4.1. (Riemannian manifold [17]) A Riemannian manifold (M, g_ab) is a real smooth manifold M with an inner product, defined by the positive definite metric tensor g_ab = g_ab(x), varying smoothly on the tangent space of M.

If the network has been well trained as a classifier, then by Euclidean distance two input points of the same class may be far apart when represented by the input coordinates but close together in the output coordinates. Similarly, two points of different classes may be near each other when represented by the input coordinates but far apart in the output coordinates. These ideas form the basis of Locally Linear Embeddings [13]. The intuitive way to measure distances is in the output coordinates, which tends to be a flattened representation of the data manifold [3]. Accordingly, the metric representation in the output coordinates is defined as the standard Euclidean metric:

g(x^(L))_{a_L b_L} := δ_{a_L b_L}   (8)

The metric tensor transforms as a tensor with coordinate transformations:

g(x^(l))_{a_l b_l} = (∂x^(l+1)/∂x^(l))^{a_{l+1}·}_{·a_l} (∂x^(l+1)/∂x^(l))^{b_{l+1}·}_{·b_l} g(x^(l+1))_{a_{l+1} b_{l+1}}   (9)

[Figure 1: Untangling the same spiral with 2-dimensional neural networks with different constraints on smoothness. The x and y axes are the two nodes of the neural network at a given layer l (layers 0 through 10), where layer 0 is the input data. (a) A C^0 network with sharply changing layer-wise particle trajectories. (b) A C^1 network with smooth layer-wise particle trajectories. (c) A C^2 network also exhibits smooth layer-wise particle trajectories. (d) A combination C^0 and C^1 network, where the identity connection is left out in layer 6. The C^0 network is a standard network, while the C^1 network is a residual network and the C^2 network also exhibits smooth layerwise transformations. All networks achieve 0.0% error rates. The momentum term in the C^2 network allows the red and blue sets to pass over each other in layers 3, 4 and 5. Figure 1d has the identity connection for all layers other than layer 6.]
The above recursive formula is solved from the output layer to the input, i.e. the coordinate representation of the metric tensor is backpropagated through the network from output to input:

g(x^(l))_{a_l b_l} = [ Π_{l'=L−1}^{l} (∂x^(l'+1)/∂x^(l'))^{a_{l'+1}·}_{·a_{l'}} (∂x^(l'+1)/∂x^(l'))^{b_{l'+1}·}_{·b_{l'}} ] δ_{a_L b_L}   (10)

If the network is taken to be residual as in Equation 2, then the Jacobian of the coordinate transformation is found, with δ^{a_{l+1}·}_{·a_l} the Kronecker delta:

(∂x^(l+1)/∂x^(l))^{a_{l+1}·}_{·a_l} = δ^{a_{l+1}·}_{·a_l} + (∂f(x^(l); l)/∂x^(l))^{a_{l+1}·}_{·a_l} Δl   (11)
[Figure 2: Layerwise coordinate transformations (layers 0 through 7) with a C^1 (residual) network, used to change the shape of the input to match an output via ℓ2 minimization. The coordinate transformations take place smoothly over the network as the next layer is a slight perturbation from the previous. Also note that if distances at the output are measured by the Euclidean metric, then to preserve the metric properties from the input, the output coordinate space becomes non-Cartesian.]
Backpropagating the coordinate representation of the metric tensor requires the sequence of matrix products from output to input, and can be defined for any layer l:

P^{a_L·}_{·a_l} := Π_{l'=l}^{L−1} [ δ^{a_{l'+1}·}_{·a_{l'}} + (∂f(z^(l'+1); l')/∂z^(l'+1))^{a_{l'+1}·}_{·e_{l'+1}} (∂z^(l'+1)/∂x^(l'))^{e_{l'+1}·}_{·a_{l'}} Δl ]   (12)

where z^(l+1) := W^(l) x^(l) + b^(l). With this, taking the output metric to be the standard Euclidean metric δ_ab, the linear element can be represented in the coordinate space of any layer l:

ds² = P^{a·}_{·a_l} P^{b·}_{·b_l} δ_ab dx^{a_l} dx^{b_l}   (13)

The data manifold is independent of coordinate representation. At the output, where distances are measured by the standard Euclidean metric, an ε-ball can be defined. The linear element in Equation 13 defines the corresponding ε-ball at layer l. This can be used to see what in the input space the neural network says is close in the output space.
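Equations (11)-(13) translate directly into a product of layer Jacobians; the numpy sketch below accumulates P from a given layer to the output and evaluates the pulled-back metric g = P^T P, as a minimal illustration for fully connected tanh residual layers with a Euclidean output metric.

```python
import numpy as np

def layer_jacobian(x, W, b, dl):
    """Jacobian of the residual map x -> x + tanh(W x + b) * dl, Equation (11)."""
    z = W @ x + b
    return np.eye(len(x)) + (np.diag(1.0 - np.tanh(z) ** 2) @ W) * dl

def metric_at_layer(xs, params, l, dl):
    """g(x^(l)) = P^T P, with P the product of Jacobians from layer l to the
    output (Equations (10)-(12)) and identity output metric (Equation (8)).
    xs[l'] is the coordinate representation at layer l'; params[l'] = (W, b)."""
    P = np.eye(len(xs[l]))
    for lp in range(l, len(params)):                 # l' = l, ..., L-1
        W, b = params[lp]
        P = layer_jacobian(xs[lp], W, b, dl) @ P     # left-multiply the next Jacobian
    return P.T @ P                                   # ds^2 = g_ab dx^a dx^b at layer l
```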
As L → ∞ this becomes an infinite product of matrices (from our infinite applications of the chain rule). Analogous to the scalar case, P^{a_L·}_{·a_0} = Π_{l'=0}^{L−1} (δ^{a_{l'+1}·}_{·a_{l'}} + A^{a_{l'+1}·}_{·a_{l'}}) converges if Σ_{l'=0}^{L−1} ||A^{a_{l'+1}·}_{·a_{l'}}||_2 converges [18]. For a fully connected network with activation tanh(z), the following inequality holds:

Σ_{l'=0}^{L−1} ||A^{a_{l'+1}·}_{·a_{l'}}||_2 = Σ_{l'=0}^{L−1} || (∂f(z^(l'+1); l')/∂z^(l'+1))^{a_{l'+1}·}_{·e_{l'+1}} (∂z^(l'+1)/∂x^(l'))^{e_{l'+1}·}_{·a_{l'}} Δl ||_2 ≤ Σ_{l'=0}^{L−1} 2 ||W^{e_{l'+1}·}_{·a_{l'}}||_2 Δl = 2 E[||W^{e_{l'+1}·}_{·a_{l'}}||_2] < ∞   (14)

where ||·||_2 is the ℓ2 norm and E[·] is the expectation. This shows that the infinite sum converges, implying that in the limit Equation 12 converges. In the limit, the actions of the coordinate transformations on the metric tensor smoothly transform the metric tensor coordinate representation.

This analysis has so far assumed a constant layerwise dimension, which is not how most neural networks are used in practice, where the number of nodes often changes. This is handled by considering the pullback metric [17].
Definition 4.2. (Pushforward map) Let M and N be topological manifolds, φ^(l) : M → N a smooth map, and T M and T N their respective tangent spaces. Also let X ∈ T M, where X : C^∞(M) → R, and f ∈ C^∞(N). The pushforward map φ^(l)_* : T M → T N takes an element X ↦ φ^(l)_* X and is defined by its action on f as (φ^(l)_* X)(f) := X(f ∘ φ^(l)).

Definition 4.3. (Pullback metric) Let (M, g_M) and (N, g_N) be Riemannian manifolds, φ^(l) : M → N a smooth map, and φ^(l)_* : T M → T N the pushforward between their tangent spaces T M and T N. Then the pullback metric on M is given by g_M(X, Y) := g_N(φ^(l)_* X, φ^(l)_* Y) for all X, Y ∈ T M.

In practice, being able to change dimensions in the neural network is important for many reasons. One reason is that neural networks usually have access to a limited number of types of nonlinear coordinate transformations, for example tanh, σ and ReLU. This severely limits the ability of the network to separate the wide variety of manifolds that exist. For example, the networks have difficulty linearly separating the simple toy spirals in Figure 1 because they only have access to coordinate transformations of the form tanh. If instead they had access to a coordinate system that was more appropriate for spirals, such as polar coordinates, they could very easily separate the data. This is the reason why Locally Linear Embeddings [13] could very easily discover the coordinate charts for the underlying manifold, because k-nearest neighbors is an extremely flexible type of nonlinearity. Allowing the network to go into higher dimensions makes it easier to separate data.
5 Lie Group actions on the metric fibre bundle

This section will abstractly formulate Section 4 as neural networks learning sequences of left Lie Group actions on the metric (fibre) space over the data manifold, to make the metric representation of the underlying data manifold Euclidean. Several definitions, which can be found in the appendix of the full version of this paper, are needed to formulate Lie group actions on principal and associated fibre bundles, namely of bundles, fibre bundles, Lie Groups and their actions on manifolds [17].

Definition 5.1. (Principal fibre bundle) A bundle (E, π, M) is called a principal G-bundle if:
(i.) E is equipped with a right G-action ◁;
(ii.) the right G-action ◁ is free;
(iii.) (E, π, M) is (bundle) isomorphic to (E, ρ, E/G), where the surjective projection map ρ : E → E/G is defined by ρ(ε) := [ε] as the equivalence class of points of ε.

Remark. (Principal bundle) The principal fibre bundle can be thought of (locally) as a fibre bundle with fibres G over the base manifold M.

Definition 5.2. (Associated fibre bundle) Given a principal G-bundle and a smooth manifold F on which exists a left G-action ▷ : G × F → F, the associated fibre bundle (P_F, π_F, M) is defined as follows:
(i.) let ∼_G be the relation on P × F defined by (p, f) ∼_G (p', f') :⟺ ∃h ∈ G : p' = p ◁ h and f' = h^{−1} ▷ f, and thus P_F := (P × F)/∼_G;
(ii.) define π_F : P_F → M by π_F([(p, f)]) := π(p).

Neural network actions on the manifold M are a (layerwise) sequence of left G-actions on the associated (metric space) fibre bundle. Let the dimension of the manifold be d := dim M.

The group G is taken to be the general linear group of dimension d over R, i.e. G = GL(d, R) := {φ : R^d → R^d | det φ ≠ 0}.

The principal bundle P is taken to be the frame bundle, i.e. P = LM := ⊔_{p∈M} L_p M := ⊔_{p∈M} {(e_1, ..., e_d) ∈ T_p M | (e_1, ..., e_d) is a basis of T_p M}, where T_p M is the tangent space of M at the point p ∈ M.

The right G-action ◁ : LM × GL(d, R) → LM is defined by e ◁ h = (e_1, ..., e_d) ◁ h := (h^{a_1·}_{·1} e_{a_1}, ..., h^{a_d·}_{·d} e_{a_d}), which is the standard transformation law of linear algebra.

The fibre F in the associated bundle will be the metric tensor space, with two lower indices, and so F = (R^d)* ⊗ (R^d)*, where the * denotes the cospace. With this, the left G-action ▷ : GL(d, R) × F → F is defined by (h^{−1} ▷ g)_{a_l b_l} := g_{a_{l+1} b_{l+1}} h^{a_{l+1}·}_{·a_l} h^{b_{l+1}·}_{·b_l}.
Layerwise sequential applications of the left G-action from output to input are thus simply understood:

(h_0^{−1} ▷ h_1^{−1} ▷ ... ▷ h_L^{−1} ▷ g)_{a_0 b_0} = ((h_0^{−1} ∘ ... ∘ h_L^{−1}) ▷ g)_{a_0 b_0} = Π_{l'=0}^{L−1} h^{a_{l'+1}·}_{·a_{l'}} h^{b_{l'+1}·}_{·b_{l'}} g_{a_L b_L}   (15)

This is equivalent to Equation 10, only formulated in a formal, abstract sense.
6 Backpropagation as a sequence of right Lie Group actions

A similar analysis to that performed in Sections 4 and 5 can be done to generalize error backpropagation as a sequence of left Lie Group actions on the output error, as well as to show that backpropagation converges as L → ∞. The discrete layerwise error backpropagation algorithm [14] is derived using the chain rule on graphs. The closed form solution of the gradient of the output error E with respect to any layer weight W^(l−1) can be solved for recursively from the output, by backpropagating errors:

∂E/∂W^(l−1) = (∂E/∂x^(L))_{a_L} [ Π_{l'=l}^{L−1} (∂x^(l'+1)/∂x^(l'))^{a_{l'+1}·}_{·a_{l'}} ] (∂x^(l)/∂W^(l−1))^{a_l}   (16)

In practice, one further applies the chain rule (∂x^(l)/∂W^(l−1))^{a_l} = (∂x^(l)/∂z^(l))^{a_l·}_{·b_l} (∂z^(l)/∂W^(l−1))^{b_l}. Note that W^(l−1) is a coordinate chart on the parameter manifold [1], not the data manifold.

In this form it is immediately seen that error backpropagation is a sequence of right G-actions Π_{l'=l}^{L−1} (∂x^(l'+1)/∂x^(l'))^{a_{l'+1}·}_{·a_{l'}} on the output frame bundle (∂E/∂x^(L))_{a_L}. This transforms the frame bundle acting on E to the coordinate system at layer l, and thus puts it in the same space as ∂x^(l)/∂W^(l−1).

For the residual network, the transformation matrix in Equation 11 can be inserted into Equation 16. By the same logic as Equation 14, the infinite tensor product in Equation 16 converges in the limit L → ∞ in the same way as Equation 12, and so it is not rewritten here. In the limit this becomes a smooth right G-action on the frame bundle, which itself is acting on the error cost function.
7 Numerical experiments

This section presents the results of numerical experiments used to understand the proposed theory. The C^∞ hyperbolic tangent has been used for all experiments, with weights initialized according to [5]. For all of the experiments, layer 0 is the input Cartesian coordinate representation of the data manifold, and the final layer L is the last hidden layer before the linear softmax classifier. GPU implementations of the neural networks are written in the Python library Theano [2, 15].

7.1 Neural networks with C^k differentiable coordinate transformations
As described in Section 3, kth order smoothness can be imposed on the network by considering network structures defined by e.g. Equations 3-4. As seen in Figure 1a, the standard C^0 network with no impositions on differentiability has very sharp layerwise transformations and separates the data in an unintuitive way. The C^1 residual network and C^2 network can be seen in Figures 1b and 1c, and exhibit smooth layerwise transformations and separate the data in a more intuitive way. Forward differencing is used for the C^1 network, while central differencing was used for the C^2 network, except at the output layer, where backward differencing was used, and at the input, where first order smoothness was used, as forward differencing violates causality.

In Figure 1c one can see that for the C^2 network the red and blue data sets pass over each other in layers 4, 5 and 6. This can be understood as the C^2 network having the same form as a residual network, with an additional momentum term pushing the data past each other.
[Figure 3: The effect of batch size on the coordinate representation learned by the same 2-dimensional C^1 network, shown at layers 0 through 50; layer 0 is the input representation, and both examples achieve 0% error. (a) A batch size of 300 for untangling data: as early as layer 4 the input connected sets have been disconnected and the data are untangled in an unintuitive way, meaning a more complex coordinate representation of the data manifold was learned. (b) A batch size of 1000 for untangling data: because the large batch size can well-sample the data manifold, the spiral sets stay connected and are untangled in an intuitive way, meaning a simple coordinate representation of the data manifold was learned. A basic theorem in topology says continuous functions map connected sets to connected sets. A small batch size of 300 during training sparsely samples from the connected manifold and the network learns overfitted coordinate representations. With a larger batch size of 1000 during training the network learns a simpler coordinate representation and keeps the connected input connected throughout.]
7.2 Coordinate representations of the data manifold

As described in Sections 4 and 5, the network is learning a sequence of coordinate transformations, beginning with Cartesian coordinates, to find a coordinate representation of the data manifold that is required by the cost function. This process can be visualized in Figure 2. This experiment used a C^1 network with an ℓ2 minimization between the input and output data, so the group actions on the principal and associated bundles act smoothly along the fibres. Grid lines were not displayed in the other experiments so that the specific effects of the other experiments can be more clearly seen.

In the forward direction, beginning with Cartesian coordinates, a sequence of C^1 differential coordinate transformations is applied to find a nonlinear coordinate representation of the data manifold such that in the output coordinates the classes satisfy the cost restraint. In the reverse direction, starting with a standard Euclidean metric at the output (Equation 8), the coordinate representation of the metric tensor is backpropagated through the network to the input by Equations 9-10 to find the metric tensor representation in the input Cartesian coordinates.
7.3 Effect of batch size on set connectedness and topology

A basic theorem in topology says that continuous functions map connected sets to connected sets. However, in Figure 3a it is seen that as early as layer 4 the continuous neural network is breaking the connected input set into disconnected sets. Additionally, although it achieves 0% error, it is learning very complicated and unintuitive coordinate transformations to represent the data in a linearly separable form. This is because during training with a small batch size of 300 in the stochastic gradient descent search, the underlying manifold was not sufficiently sampled to represent the entire connected manifold, and so it seemed disconnected.

This is compared to Figure 3b, in which a larger batch size of 1000 was used that sufficiently sampled the entire connected manifold, and the network was also able to achieve 0% error. The coordinate transformations learned by the neural network with the larger batch size seem to untangle the data in a simpler, more intuitive way than that of Figure 3a. Note that this experiment is in 2 dimensions, and with higher dimensional data the issue of batch size and set connectedness becomes exponentially more important.
[Figure 4: The effect of the number of layers on the separation process of a C^1 neural network, shown at evenly spaced layers from input to output. (a) A 10 layer C^1 network struggles to separate the spirals and has a 1% error rate. (b) A 20 layer C^1 network is able to separate the spirals and has a 0% error rate. (c) A 40 layer C^1 network is able to separate the spirals and has a 0% error rate. In Figure 4a it is seen that the Δl is too large to properly separate the data. In Figures 4b and 4c the Δl is sufficiently small to separate the data. Interestingly, the separation process is not as simple as merely doubling the parameterization and halving the partitioning in Equation 7, because this is a nonlinear system of ODEs. This is seen in Figures 4b and 4c; the data are at different levels of separation at the same position of layer parameterization, for example by comparing layer 18 in Figure 4b to layer 36 in Figure 4c.]
7.4
Effect of number of layers on the separation process
This section compares the process by which 2-dimensional $C^1$ networks with 10, 20 and 40 layers separate the same data, thus experimenting on the $\Delta l$ in the partitioning of Equation 7, as seen in Figure 4. The 10 layer network is unable to properly separate the data and achieves a 1% error rate, whereas the 20 and 40 layer networks both achieve 0% error rates. In Figures 4b and 4c it is seen that at the same positions of layer parameterization, for example layers 18 and 36 respectively, the data are at different levels of separation. This implies that the partitioning cannot be interpreted as simply as halving the $\Delta l$ when doubling the number of layers. This is because the system of ODEs is nonlinear and the $\Delta l$ is implicit in the weight matrix.
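The role of the $\Delta l$ can be made concrete with a minimal residual-network sketch (hypothetical Python, not the paper's code): each layer applies one forward-Euler step of an ODE, $h \leftarrow h + \Delta l \cdot \tanh(Wh + b)$. With weights shared across layers, halving $\Delta l$ while doubling the depth approximates the same flow; with layer-specific learned weights, no such rescaling exists, since the $\Delta l$ is absorbed into the weights of a nonlinear system.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_resnet(h, weights, biases, dl):
    # One forward-Euler step per layer: dh/dl = tanh(W h + b).
    for W, b in zip(weights, biases):
        h = h + dl * np.tanh(h @ W.T + b)
    return h

h0 = rng.normal(size=(5, 2))                 # a small batch of 2-D points
W = rng.normal(size=(2, 2)); b = rng.normal(size=2)

# Shared weights, same total depth L = n_layers * dl: both networks
# discretize the same ODE flow, so their outputs nearly agree.
out_10 = euler_resnet(h0, [W] * 10, [b] * 10, dl=0.1)
out_20 = euler_resnet(h0, [W] * 20, [b] * 20, dl=0.05)
print(np.abs(out_10 - out_20).max())         # shrinks as dl -> 0

# With layer-specific learned weights, dl cannot be factored back out:
Ws = [rng.normal(size=(2, 2)) for _ in range(10)]
bs = [rng.normal(size=2) for _ in range(10)]
out = euler_resnet(h0, Ws, bs, dl=0.1)       # dl is implicit in (Ws, bs)
```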
8 Conclusions
This paper forms part of an attempt to construct a formalized general theory of neural networks as a branch of Riemannian geometry. In the forward direction, starting in Cartesian coordinates, the network learns a sequence of coordinate transformations to find a coordinate representation of the data manifold that is linearly separable. This implicitly imposes a flatness constraint on the metric tensor in this learned coordinate system. One can then backpropagate the coordinate representation of the metric tensor to find its form in Cartesian coordinates. This can be used to define an $\epsilon$-$\delta$ relationship between the input and output data. Coordinate backpropagation was formulated in a formal, abstract sense in terms of Lie group actions on the metric fibre bundle. The error backpropagation algorithm was then formulated in terms of Lie group actions on the frame bundle. For a residual network in the limit, the Lie group acts smoothly along the fibres of the bundles. Experiments were conducted to confirm and better understand aspects of this formulation.
9 Acknowledgements
This work has been supported in part by the U.S. Air Force Office of Scientific Research (AFOSR) under Grant No. FA9550-15-1-0400. The first author has been supported by the PSU/ARL Walker Fellowship. Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsoring agencies.
6,494 | 6,874 | Cold-Start Reinforcement Learning with
Softmax Policy Gradient
Nan Ding
Google Inc.
Venice, CA 90291
[email protected]
Radu Soricut
Google Inc.
Venice, CA 90291
[email protected]
Abstract
Policy-gradient approaches to reinforcement learning have two common and undesirable overhead procedures, namely warm-start training and sample variance
reduction. In this paper, we describe a reinforcement learning method based on a
softmax value function that requires neither of these procedures. Our method combines the advantages of policy-gradient methods with the efficiency and simplicity
of maximum-likelihood approaches. We apply this new cold-start reinforcement
learning method in training sequence generation models for structured output
prediction problems. Empirical evidence validates this method on automatic summarization and image captioning tasks.
1 Introduction
Reinforcement learning is the study of optimal sequential decision-making in an environment [16]. Its
recent developments underpin a large variety of applications related to robotics [11, 5] and games [20].
Policy search in reinforcement learning refers to the search for optimal parameters for a given policy
parameterization [5]. Policy search based on policy-gradient [26, 21] has been recently applied to
structured output prediction for sequence generations. These methods alleviate two common problems
that approaches based on training with the Maximum-likelihood Estimation (MLE) objective exhibit,
namely the exposure-bias problem [24, 19] and the wrong-objective problem [19, 15] (more on this
in Section 2). As a result of addressing these problems, policy-gradient methods achieve improved
performance compared to MLE training in various tasks, including machine translation [19, 7], text
summarization [19], and image captioning [19, 15].
Policy-gradient methods for sequence generation work as follows: first the model proposes a sequence,
and the ground-truth target is used to compute a reward for the proposed sequence with respect to
the reward of choice (using metrics known to correlate well with human-rated correctness, such
as ROUGE [13] for summarization, BLEU [18] for machine translation, CIDEr [23] or SPICE [1]
for image captioning, etc.). The reward is used as a weight for the log-likelihood of the proposed
sequence, and learning is done by optimizing the weighted average of the log-likelihood of the
proposed sequences. The policy-gradient approach works around the difficulty of differentiating the
reward function (the majority of which are non-differentiable) by using it as a weight. However, since
sequences proposed by the model are also used as the target of the model, they are very noisy and
their initial quality is extremely poor. The difficulty of aligning the model output distribution with
the reward distribution over the large search space of possible sequences makes training slow and
inefficient? . As a result, overhead procedures such as warm-start training with the MLE objective
and sophisticated methods for sample variance reduction are required to train with policy gradient.
?
Search space size is O(V T ), where V is the number of word types in the vocabulary (typically between 104
and 106 ) and T is the the sequence length (typically between 10 and 50), hence between 1040 and 10300 .
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The fundamental reason for the inefficiency of policy-gradient-based reinforcement learning is the
large discrepancy between the model-output distribution and the reward distribution, especially in
the early stages of training. If, instead of generating the target based solely on the model-output
distribution, we generate it based on a proposal distribution that incorporates both the model-output
distribution and the reward distribution, learning would be efficient, and neither warm-start training
nor sample variance reduction would be needed. The outstanding problem is finding a value function
that induces such a proposal distribution.
In this paper, we describe precisely such a value function, which in turn gives us a Softmax Policy
Gradient (SPG) method. The softmax terminology comes from the equation that defines this value
function, see Section 3. The gradient of the softmax value function is equal to the average of the
gradient of the log-likelihood of the targets whose proposal distribution combines both model output
distribution and reward distribution. Although this distribution is infeasible to sample exactly, we
show that one can draw samples approximately, based on an efficient forward-pass sampling scheme.
To balance the importance between the model output distribution and the reward distribution, we use
a bang-bang [8] mixture model to combine the two distributions. Such a scheme removes the need of
fine-tuning the weights across different datasets and throughout the learning epochs. In addition to
using a main metric as the task reward (ROUGE, CIDEr, etc.), we show that one can also incorporate
additional, task-specific metrics to enforce various properties on the output sequences (Section 4).
We numerically evaluate our method on two sequence generation benchmarks, a headline-generation
task and an image-caption?generation task (Section 5). In both cases, the SPG method significantly
improves the accuracy, compared to maximum-likelihood and other competing methods. Finally, it is worth noting that although the training and inference of the SPG method in this paper are mainly based on sequence learning, the idea can be extended to other reinforcement learning applications.
2 Limitations of Existing Sequence Learning Regimes
One of the standard approaches to sequence-learning training is Maximum-likelihood Estimation (MLE). Given a set of inputs $X = \{x^i\}$ and target sequences $Y = \{y^i\}$, the MLE loss function is:
$$L_{MLE}(\theta) = \sum_i L^i_{MLE}(\theta), \quad \text{where} \quad L^i_{MLE}(\theta) = -\log p_\theta(y^i|x^i). \tag{1}$$
Here $x^i$ and $y^i = (y^i_1, \dots, y^i_T)$ denote the input and the target sequence of the $i$-th example, respectively. For instance, in the image captioning task, $x^i$ is the image of the $i$-th example, and $y^i$ is the groundtruth caption of the $i$-th example.
Although widely used in many different applications, MLE estimation for sequence learning suffers from the exposure-bias problem [24, 19]. Exposure-bias refers to training procedures that produce brittle models that have only been exposed to their training data distribution but not to their own predictions. At training time, $\log p_\theta(y^i|x^i) = \sum_t \log p_\theta(y^i_t|x^i, y^i_{1...t-1})$, i.e. the loss of the $t$-th word is conditioned on the true previous-target tokens $y^i_{1...t-1}$. However, since $y^i_{1...t-1}$ are unavailable during inference, replacing them with tokens $z^i_{1...t-1}$ generated by $p_\theta(z^i_{1...t-1}|x^i)$ yields a significant discrepancy between how the model is used at training time versus inference time. The exposure-bias problem has recently received attention in neural-network settings with the "data as demonstrator" [24] and "scheduled sampling" [3] approaches. Although improving model performance in practice, such proposals have been shown to be statistically inconsistent [10], and still need to perform MLE-based warm-start training.
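To make the training/inference mismatch concrete, here is a small hypothetical sketch (not from the paper) contrasting teacher forcing with free-running decoding; the `ToyBigramModel` is an assumed stand-in for an arbitrary autoregressive sequence model.

```python
import numpy as np

class ToyBigramModel:
    # A tiny fixed "language model" over 3 symbols (an assumption for
    # illustration): next-symbol probabilities depend only on the last symbol.
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        p = rng.uniform(size=(4, 3))            # row 3 = start-of-sequence
        self.p = p / p.sum(axis=1, keepdims=True)
        self.rng = rng

    def probs(self, prefix):
        return self.p[prefix[-1] if prefix else 3]

    def log_prob(self, token, prefix):
        return np.log(self.probs(prefix)[token])

    def sample(self, prefix):
        return int(self.rng.choice(3, p=self.probs(prefix)))

model = ToyBigramModel()
y = [0, 2, 1, 1]                                 # ground-truth target sequence

# Training time (teacher forcing): every step conditions on the true prefix.
nll = -sum(model.log_prob(y[t], y[:t]) for t in range(len(y)))

# Inference time (free running): every step conditions on the model's own
# prefix, so an early sampling mistake changes all later contexts -- a
# situation the teacher-forced loss never exposes the model to.
z = []
for t in range(len(y)):
    z.append(model.sample(z))
print(nll, z)
```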
A more general approach to MLE is the Reward Augmented Maximum Likelihood (RAML) method [17]. RAML makes the correct observation that, under MLE, all alternative outputs are equally penalized through normalization, regardless of their relationship to the ground-truth target. Instead, RAML corrects for this shortcoming using an objective of the form:
$$L^i_{RAML}(\theta) = -\sum_{z^i} r_R(z^i|y^i)\,\log p_\theta(z^i|x^i), \quad \text{where} \quad r_R(z^i|y^i) = \frac{\exp(R(z^i|y^i)/\tau)}{\sum_{z^i}\exp(R(z^i|y^i)/\tau)}. \tag{2}$$
This formulation uses $R(z^i|y^i)$ to denote the value of a similarity metric $R$ between $z^i$ and $y^i$ (the reward), with $y^i = \arg\max_{z^i} R(z^i|y^i)$; $\tau$ is a temperature hyper-parameter to control the peakiness of this reward distribution. Since the sum over all $z^i$ for the reward distribution $r_R(z^i|y^i)$ in Eq. (2) is infeasible to compute, a standard approach is to draw $J$ samples $z^{ij}$ from the reward distribution, and approximate the expectation by Monte Carlo integration:
$$L^i_{RAML}(\theta) \simeq -\frac{1}{J}\sum_{j=1}^{J} \log p_\theta(z^{ij}|x^i). \tag{3}$$
Although a clear improvement over Eq. (1), the sampling for $z^{ij}$ in Eq. (3) is solely based on $r_R(z^i|y^i)$ and completely ignores the model probability. At the same time, this technique does not address the exposure bias problem at all.
A different approach, based on reinforcement learning methods, achieves sequence learning following a policy-gradient method [21]. Its appeal is that it not only solves the exposure-bias problem, but also directly alleviates the wrong-objective problem [19, 15] of MLE approaches. Wrong-objective refers to the critique that MLE-trained models tend to have suboptimal performance because such models are trained on a convenient objective (i.e., maximum likelihood) rather than a desirable objective (e.g., a metric known to correlate well with human-rated correctness). The policy-gradient method uses a value function $V_{PG}$, which is equivalent to a loss $L_{PG}$ defined as:
$$L^i_{PG}(\theta) = -V^i_{PG}(\theta), \quad V^i_{PG}(\theta) = E_{p_\theta(z^i|x^i)}[R(z^i|y^i)]. \tag{4}$$
The gradient for Eq. (4) is:
$$\frac{\partial}{\partial\theta} L^i_{PG}(\theta) = -\sum_{z^i} p_\theta(z^i|x^i)\, R(z^i|y^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i). \tag{5}$$
Similar to (3), one can draw $J$ samples $z^{ij}$ from $p_\theta(z^i|x^i)$ to approximate the expectation by Monte-Carlo integration:
$$\frac{\partial}{\partial\theta} L^i_{PG}(\theta) \simeq -\frac{1}{J}\sum_{j=1}^{J} R(z^{ij}|y^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^{ij}|x^i). \tag{6}$$
However, the large discrepancy between the model prediction distribution $p_\theta(z^i|x^i)$ and the reward $R(z^i|y^i)$ values, which is especially acute during the early training stages, makes the Monte-Carlo integration extremely inefficient. As a result, this method also requires a warm-start phase in which the model distribution achieves some local maximum with respect to a reward-metric-free objective (e.g., MLE), followed by a model refinement phase in which reward-metric-based PG updates are used to refine the model [19, 7, 15]. Although this combination achieves better results in practice compared to pure likelihood-based approaches, it is unsatisfactory from a theoretical and modeling perspective, as well as inefficient from a speed-to-convergence perspective. Both these issues are addressed by the value function we describe next.
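Eq. (6) corresponds to a REINFORCE-style estimator [26]. A minimal one-step sketch of it (hypothetical code, not the paper's implementation; the toy reward table is an assumption) makes the variance problem visible: the sampled reward directly scales the log-likelihood gradient, and with an untrained policy most samples carry little signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

theta = np.zeros(5)                              # logits of a toy policy
reward = np.array([0.0, 0.1, 0.2, 0.9, 1.0])     # assumed reward per action

def pg_gradient_estimate(theta, J):
    # Eq. (6): average of R(z) * d/dtheta log p_theta(z) over J model samples.
    p = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(J):
        z = rng.choice(len(p), p=p)
        dlogp = -p
        dlogp[z] += 1.0                          # d log softmax / d logits
        grad += reward[z] * dlogp
    return grad / J

# With a uniform (cold-start) policy, high-reward actions are rarely sampled,
# so the estimate is noisy -- the inefficiency discussed above.
print(pg_gradient_estimate(theta, J=10))
```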
3 Softmax Policy Gradient (SPG) Method
In order to smoothly incorporate both the model distribution $p_\theta(z^i|x^i)$ and the reward metric $R(z^i|y^i)$, we replace the value function from Eq. (4) with a Softmax value function for Policy Gradient (SPG), $V_{SPG}$, equivalent to a loss $L_{SPG}$ defined as:
$$L^i_{SPG}(\theta) = -V^i_{SPG}(\theta), \quad V^i_{SPG}(\theta) = \log E_{p_\theta(z^i|x^i)}[\exp(R(z^i|y^i))]. \tag{7}$$
Because the value function for example $i$ is equal to $\mathrm{Softmax}_{z^i}(\log p_\theta(z^i|x^i) + R(z^i|y^i))$, where $\mathrm{Softmax}_{z^i}(\cdot) = \log\sum_{z^i}\exp(\cdot)$, we call it the softmax value function. Note that the softmax value function from Eq. (7) is the dual of the entropy-regularized policy search (REPS) objective [5, 16], $L(q) = E_q[R] + KL(q|p_\theta)$. However, our learning and sampling procedures are significantly different from REPS, as shown in what follows.
The gradient for Eq. (7) is:
$$\frac{\partial}{\partial\theta} L^i_{SPG}(\theta) = -\frac{1}{\sum_{z^i} p_\theta(z^i|x^i)\exp(R(z^i|y^i))} \sum_{z^i}\left( p_\theta(z^i|x^i)\exp(R(z^i|y^i))\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i) \right) = -\sum_{z^i} q_\theta(z^i|x^i,y^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i), \tag{8}$$
where $q_\theta(z^i|x^i,y^i) = \frac{1}{\sum_{z^i} p_\theta(z^i|x^i)\exp(R(z^i|y^i))}\; p_\theta(z^i|x^i)\exp(R(z^i|y^i))$.
There are several advantages associated with the gradient from Eq. (8).
First, $q_\theta(z^i|x^i,y^i)$ takes into account both $p_\theta(z^i|x^i)$ and $R(z^i|y^i)$. As a result, Monte Carlo integration over $q_\theta$-samples approximates Eq. (8) better, and has smaller variance compared to Eq. (5). This allows our model to start learning from scratch without the warm-start and variance-reduction crutches needed by previously-proposed PG approaches.

[Figure 1: Comparing the target samples for MLE (triangle), RAML (the $r_R$ distribution; circles), PG (the $p_\theta$ distribution; squares), and SPG (the $q_\theta$ distribution; pentagons).]

Second, as Figure 1 shows, the samples for the SPG method (pentagons) lie between the ground-truth target distribution (triangle and circles) and the model distribution (squares). These targets are both easier to learn by $p_\theta$ compared to ground-truth-only targets like the ones for MLE (triangle) and RAML (circles), and also carry more information about the ground-truth target compared to model-only samples (PG squares). This formulation allows us to directly address the exposure-bias problem, by allowing the model distribution to learn at training time how to deal with events conditioned on model-generated tokens, similar to what happens at inference time (more on this in Section 3.2). At the same time, the updates used for learning rely heavily on the influence of the reward metric $R(z^i|y^i)$, therefore directly addressing the wrong-objective problem. Together, these properties allow the model to achieve improved accuracy.
Third, although $q_\theta$ is infeasible for exact sampling, since both $p_\theta(z^i|x^i)$ and $\exp(R(z^i|y^i))$ are factorizable across $z^i_t$ (where $z^i_t$ denotes the $t$-th word of the $i$-th output sequence), we can apply efficient approximate inference for the SPG method as shown in the next section.
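For intuition, in a toy setting where the output space is small enough to enumerate, the softmax value function of Eq. (7) and the induced distribution $q_\theta$ of Eq. (8) can be computed directly. The sketch below is a hypothetical illustration (the model and reward vectors are assumptions, not from the paper):

```python
import numpy as np

def softmax_value(log_p, R):
    # V_SPG = log E_p[exp(R)] = logsumexp(log p(z) + R(z)) over all z (Eq. 7).
    s = log_p + R
    m = s.max()
    return m + np.log(np.exp(s - m).sum())

def q_theta(log_p, R):
    # q(z) ∝ p(z) exp(R(z)): the proposal combining model and reward (Eq. 8).
    s = log_p + R
    e = np.exp(s - s.max())
    return e / e.sum()

log_p = np.log(np.array([0.5, 0.3, 0.15, 0.05]))  # toy model distribution
R = np.array([0.0, 0.2, 1.0, 2.0])                # toy reward per output
print(softmax_value(log_p, R))                    # scalar value V_SPG
print(q_theta(log_p, R))  # shifts mass toward high-reward outputs while
                          # still respecting the model's own probabilities
```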
3.1 Inference
In order to estimate the gradient from Eq. (8) with Monte-Carlo integration, one needs to be able to draw samples from $q_\theta(z^i|x^i,y^i)$. To tackle this problem, we first decompose $R(z^i|y^i)$ along the $t$-axis:
$$R(z^i|y^i) = \sum_{t=1}^{T} \underbrace{R(z^i_{1:t}|y^i) - R(z^i_{1:t-1}|y^i)}_{\triangleq\, \Delta r^i_t(z^i_t|y^i,\, z^i_{1:t-1})},$$
where $R(z^i_{1:t}|y^i) - R(z^i_{1:t-1}|y^i)$ characterizes the reward increment for $z^i_t$. Using the reward increment notation, we can rewrite:
$$q_\theta(z^i|x^i,y^i) = \frac{1}{Z_\theta(x^i,y^i)} \prod_{t=1}^{T} \exp\big(\log p_\theta(z^i_t|z^i_{1:t-1}, x^i) + \Delta r^i_t(z^i_t|y^i, z^i_{1:t-1})\big),$$
where $Z_\theta(x^i,y^i)$ is the partition function equal to the sum over all configurations of $z^i$. Since the number of such configurations grows exponentially with respect to the sequence-length $T$, directly drawing from $q_\theta(z^i|x^i,y^i)$ is infeasible. To make the inference efficient, we replace $q_\theta(z^i|x^i,y^i)$ with the following approximate distribution:
$$\tilde q_\theta(z^i|x^i,y^i) = \prod_{t=1}^{T} \tilde q_\theta(z^i_t|x^i, y^i, z^i_{1:t-1}), \quad \text{where} \quad \tilde q_\theta(z^i_t|x^i, y^i, z^i_{1:t-1}) = \frac{1}{\tilde Z_\theta(x^i, y^i, z^i_{1:t-1})} \exp\big(\log p_\theta(z^i_t|z^i_{1:t-1}, x^i) + \Delta r^i_t(z^i_t|y^i, z^i_{1:t-1})\big).$$
By replacing $q_\theta$ in Eq. (8) with $\tilde q_\theta$, we obtain:
$$\frac{\partial}{\partial\theta} L^i_{SPG}(\theta) = -\sum_{z^i} q_\theta(z^i|x^i,y^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i) \simeq -\sum_{z^i} \tilde q_\theta(z^i|x^i,y^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i) \triangleq \frac{\partial}{\partial\theta}\tilde L^i_{SPG}(\theta). \tag{9}$$
Compared to $Z_\theta(x^i,y^i)$, $\tilde Z_\theta(x^i,y^i,z^i_{1:t-1})$ sums over the configurations of one $z^i_t$ only. Therefore, the cost of drawing one $z^i$ from $\tilde q_\theta(z^i|x^i,y^i)$ grows only linearly with respect to $T$. Furthermore, for common reward metrics such as ROUGE and CIDEr, the computation of $\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1})$ can be done in $O(T)$ instead of $O(V)$ (where $V$ is the size of the state space for $z^i_t$, i.e., vocabulary size). That is because the maximum number of unique words in $y^i$ is $T$, and any words not in $y^i$ have the same reward increment. When we limit ourselves to $J = 1$ sample for each example in Eq. (9), the approximate SPG inference time of each example is similar to the inference time for the gradient of the MLE objective. Combined with the empirical findings in Section 5 (Figure 3), where the steps for convergence are comparable, we conclude that the time for convergence for the SPG method is similar to the MLE-based method.
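A minimal sketch of this forward-pass sampling is given below (hypothetical code, not the paper's implementation). `log_p_step` stands in for the model's per-step distribution, and the unigram-recall reward `R` is a toy assumption (the paper uses ROUGE/CIDEr); with such a reward the increment $\Delta r$ differs from a constant only for tokens appearing in $y$, which is what makes the $O(T)$ computation possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q_tilde(log_p_step, R, x, y, T, vocab_size):
    # Draw z ~ q~(z | x, y) one token at a time (Section 3.1).
    z = []
    for t in range(T):
        logits = np.array([
            log_p_step(v, x, z) + (R(z + [v], y) - R(z, y))  # log p + Δr
            for v in range(vocab_size)
        ])
        p = np.exp(logits - logits.max())
        p /= p.sum()
        z.append(int(rng.choice(vocab_size, p=p)))
    return z

# Toy instantiation (assumptions for illustration):
def R(prefix, y):
    # Unigram recall-style reward: credit for covering the words of y.
    return len(set(prefix) & set(y)) / max(len(y), 1)

def log_p_step(v, x, prefix):
    return -np.log(5.0)   # uniform toy model over a 5-word vocabulary

print(sample_q_tilde(log_p_step, R, x=None, y=[1, 3, 4], T=3, vocab_size=5))
```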
3.2 Bang-bang Rewarded SPG Method
One additional difficulty for the SPG method is that the model's log-probability values $\log p_\theta(z^i_t|z^i_{1:t-1},x^i)$ and the reward-increment values $R(z^i_{1:t}|y^i) - R(z^i_{1:t-1}|y^i)$ are not on the same scale. In order to balance the impact of these two factors, we need to weigh them appropriately. Formally, we achieve this by adding a weight $w^i_t$ to the reward increments: $\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t) \triangleq w^i_t \cdot \Delta r^i_t(z^i_t|y^i,z^i_{1:t-1})$, so that the total reward is $R(z^i|y^i,w^i) = \sum_{t=1}^{T}\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t)$. The approximate proposal distribution becomes $\tilde q_\theta(z^i|x^i,y^i,w^i) = \prod_{t=1}^{T} \tilde q_\theta(z^i_t|x^i,y^i,z^i_{1:t-1},w^i_t)$, where $\tilde q_\theta(z^i_t|x^i,y^i,z^i_{1:t-1},w^i_t) \propto \exp(\log p_\theta(z^i_t|z^i_{1:t-1},x^i) + \Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t))$.
The challenge in this case is to choose an appropriate weight $w^i_t$, because $\log p_\theta(z^i_t|z^i_{1:t-1},x^i)$ varies heavily for different $i$, $t$, as well as across different iterations and tasks.
In order to minimize the effort of fine-tuning the reward weights, we propose a bang-bang rewarded softmax value function, equivalent to a loss $L_{BBSPG}$ defined as:
$$L^i_{BBSPG}(\theta) = -\sum_{w^i} p(w^i)\, \log E_{p_\theta(z^i|x^i)}[\exp(R(z^i|y^i,w^i))], \tag{10}$$
and
$$\frac{\partial}{\partial\theta}\tilde L^i_{BBSPG}(\theta) = -\sum_{w^i} p(w^i) \underbrace{\sum_{z^i} \tilde q_\theta(z^i|x^i,y^i,w^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i|x^i)}_{\triangleq\, -\frac{\partial}{\partial\theta}\tilde L^i_{SPG}(\theta|w^i)}, \tag{11}$$
where $p(w^i) = \prod_t p(w^i_t)$ and $p(w^i_t = 0) = p_{drop} = 1 - p(w^i_t = W)$. Here $W$ is a sufficiently large number (e.g., 10,000), and $p_{drop}$ is a hyper-parameter in $[0, 1]$. The name bang-bang is borrowed from control theory [8], and refers to a system which switches abruptly between two extreme states (namely $W$ and $0$).
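In code, the bang-bang weights amount to an i.i.d. Bernoulli draw per time step; the hypothetical sketch below uses $W = 10{,}000$ as in the paper, while the rest is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bang_bang_weights(T, p_drop, W=10_000):
    # w_t = 0 with probability p_drop (step sampled from the model alone),
    # w_t = W otherwise (step driven by the reward increment).
    return np.where(rng.uniform(size=T) < p_drop, 0.0, float(W))

print(sample_bang_bang_weights(T=10, p_drop=0.4))
```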
When $w^i_t = W$, the term $\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t)$ overwhelms $\log p_\theta(z^i_t|z^i_{1:t-1},x^i)$, so the sampling of $z^i_t$ is decided by the reward increment of $z^i_t$. It is important to emphasize that in general the groundtruth label $y^i_t \neq \arg\max_{z^i_t} \Delta r^i_t(z^i_t|y^i,z^i_{1:t-1})$, because $z^i_{1:t-1}$ may not be the same as $y^i_{1:t-1}$ (see an example in Figure 2). The only special case is when $p_{drop} = 0$, which forces $w^i_t$ to always equal $W$, and implies $z^i_t$ is always equal to $y^i_t$† (and therefore the SPG method reduces to the MLE method).

[Figure 2: An example of sequence generation with the bang-bang reward weights, for the groundtruth $y$ = "a man is sitting in the park" ($t = 1, \dots, 7$) and weights $w = (W, W, W, 0, W, \dots)$. Here $z_4$ = "in" is sampled from the model distribution since $w_4 = 0$; although $w_5 = W$, $z_5 = \arg\max r_5(z_5|y, z_{1:4})$ = "the" $\neq$ $y_5$ = "in", because $z_4$ = "in".]

On the other hand, when $w^i_t = 0$, by definition $\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t) = 0$. In this case, the sampling of $z^i_t$ is based only on the model prediction distribution $p_\theta(z^i_t|z^i_{1:t-1},x^i)$, the same situation we have at inference time. Furthermore, we have the following lemma (with the proof provided in the Supplementary Material):

† This follows from recursively applying $R$'s property that $y^i_t = \arg\max_{z^i_t} \Delta r^i_t(z^i_t|y^i, z^i_{1:t-1} = y^i_{1:t-1})$.
Lemma 1. When $w^i_t = 0$,
$$\sum_{z^i} \tilde q_\theta(z^i|x^i,y^i,w^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i_t|x^i,z^i_{1:t-1}) = 0.$$
As a result, $\frac{\partial}{\partial\theta}\tilde L^i_{SPG}(\theta|w^i)$ is very different from traditional PG-method gradients, in that only the $z^i_t$ with $w^i_t \neq 0$ are included. To see that, using the fact that $\log p_\theta(z^i|x^i) = \sum_{t=1}^{T}\log p_\theta(z^i_t|x^i,z^i_{1:t-1})$,
$$\frac{\partial}{\partial\theta}\tilde L^i_{SPG}(\theta|w^i) = -\sum_{z^i}\sum_t \tilde q_\theta(z^i|x^i,y^i,w^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i_t|x^i,z^i_{1:t-1}). \tag{12}$$
Using the result of Lemma 1, Eq. (12) is equal to:
$$\frac{\partial}{\partial\theta}\tilde L^i_{SPG}(\theta|w^i) = -\sum_{\{t:\, w^i_t \neq 0\}} \sum_{z^i} \tilde q_\theta(z^i|x^i,y^i,w^i)\, \frac{\partial}{\partial\theta}\log p_\theta(z^i_t|x^i,z^i_{1:t-1}) = -\sum_{z^i} \tilde q_\theta(z^i|x^i,y^i,w^i) \sum_{\{t:\, w^i_t \neq 0\}} \frac{\partial}{\partial\theta}\log p_\theta(z^i_t|x^i,z^i_{1:t-1}). \tag{13}$$
Using Monte-Carlo integration, we approximate Eq. (11) by first drawing $w^{ij}$ from $p(w^i)$ and then iteratively drawing $z^{ij}_t$ from $\tilde q_\theta(z^i_t|x^i,z^i_{1:t-1},y^i,w^{ij}_t)$ for $t = 1,\dots,T$. For larger values of $p_{drop}$, the $w^{ij}$ sample contains more $w^{ij}_t = 0$ and the resulting $z^{ij}$ contains proportionally more samples from the model prediction distribution (with a direct effect on alleviating the exposure-bias problem). After $z^{ij}$ is obtained, only the log-likelihoods of the $z^{ij}_t$ with $w^{ij}_t \neq 0$ are included in the loss:
$$\frac{\partial}{\partial\theta}\tilde L^i_{BBSPG}(\theta) \simeq -\frac{1}{J}\sum_{j=1}^{J} \sum_{\{t:\, w^{ij}_t \neq 0\}} \frac{\partial}{\partial\theta}\log p_\theta(z^{ij}_t|x^i, z^{ij}_{1:t-1}). \tag{14}$$
The details about the gradient evaluation for the bang-bang rewarded softmax value function are described in Algorithm 1 of the Supplementary Material.
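Putting Eqs. (10)-(14) together, one bang-bang SPG training step looks roughly as follows. This is a hypothetical sketch under the same toy assumptions as the earlier snippets (uniform model, unigram-recall reward), not Algorithm 1 from the supplementary material:

```python
import numpy as np

rng = np.random.default_rng(0)

def bbspg_step_targets(log_p_step, R, x, y, T, vocab_size, p_drop, W=10_000):
    # Returns sampled targets z and the mask of steps that enter the loss.
    w = np.where(rng.uniform(size=T) < p_drop, 0.0, float(W))
    z, keep = [], []
    for t in range(T):
        logits = np.array([
            log_p_step(v, x, z) + w[t] * (R(z + [v], y) - R(z, y))
            for v in range(vocab_size)
        ])
        p = np.exp(logits - logits.max())
        p /= p.sum()
        z.append(int(rng.choice(vocab_size, p=p)))
        keep.append(w[t] != 0.0)   # Eq. (14): only w_t != 0 steps contribute
    return z, keep

def R(prefix, y):                   # toy unigram-recall reward (assumption)
    return len(set(prefix) & set(y)) / max(len(y), 1)

def log_p_step(v, x, prefix):
    return -np.log(5.0)             # uniform toy model over 5 tokens

z, keep = bbspg_step_targets(log_p_step, R, None, [1, 3, 4],
                             T=4, vocab_size=5, p_drop=0.4)
print(z, keep)
# The masked negative log-likelihood over (z, keep) is then minimized with
# standard MLE machinery; steps with w_t = 0 expose the model to its own
# samples without penalizing it for them.
```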
4 Additional Reward Functions
Besides the main reward function $R(z^i|y^i)$, additional reward functions can be used to enforce desirable properties for the output sequences. For instance, in summarization, we occasionally find that the decoded output sequence contains repeated words, e.g. "US R&B singer Marie Marie Marie Marie ...". In this framework, this can be directly fixed by using an additional auxiliary reward function that simply rewards negatively two consecutive tokens in the generated sequence:
$$DUP^i_t = \begin{cases} -1 & \text{if } z^i_t = z^i_{t-1}, \\ 0 & \text{otherwise.} \end{cases}$$
In conjunction with the bang-bang weight scheme, the introduction of such a reward function has the immediate effect of severely penalizing such "stuttering" in the model output; the decoded sequence after applying the DUP negative reward becomes: "US R&B singer Marie Christina has ...".
Additionally, we can use the same approach to correct for certain biases in the forward sampling approximation. For example, the following function negatively rewards the end-of-sentence symbol when the length of the output sequence is less than that of the ground-truth target sequence $|y^i|$:
$$EOS^i_t = \begin{cases} -1 & \text{if } z^i_t = \text{</S> and } t < |y^i|, \\ 0 & \text{otherwise.} \end{cases}$$
A more detailed discussion about such reward functions is available in the Supplementary Material.
During training, we linearly combine the main reward function with the auxiliary functions:
$$\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},w^i_t) = w^i_t \cdot \big(R(z^i_{1:t}|y^i) - R(z^i_{1:t-1}|y^i)\big) + DUP^i_t + EOS^i_t,$$
with $W = 10{,}000$. During testing, since the ground-truth target $y^i$ is unavailable, this becomes:
$$\Delta r^i_t(z^i_t|y^i,z^i_{1:t-1},W) = W \cdot DUP^i_t.$$
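These auxiliary rewards are straightforward to implement as per-step increments. Below is a hypothetical sketch; `EOS_ID` is an assumed token id, not something specified by the paper.

```python
EOS_ID = 2  # assumed id of the </S> token

def dup_reward(z, t):
    # -1 if the token at step t repeats the previous token, else 0.
    return -1.0 if t > 0 and z[t] == z[t - 1] else 0.0

def eos_reward(z, t, target_len):
    # -1 if </S> is emitted before reaching the ground-truth length.
    return -1.0 if z[t] == EOS_ID and t < target_len else 0.0

def total_increment(z, t, y, w_t, main_increment):
    # Training-time combination of main and auxiliary rewards (Section 4).
    return w_t * main_increment + dup_reward(z, t) + eos_reward(z, t, len(y))

# A repeated token is penalized regardless of the main reward:
print(total_increment([5, 5], t=1, y=[5, 7, 8], w_t=10_000, main_increment=0.0))
```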
5 Experiments
We numerically evaluate the proposed softmax policy gradient (SPG) method on two sequence generation benchmarks: a document-summarization task for headline generation, and an automatic image-captioning task. We compare the results of the SPG method against the standard maximum likelihood estimation (MLE) method, as well as the reward augmented maximum likelihood (RAML) method [17]. Our experiments indicate that the SPG method outperforms significantly the other approaches on both the summarization and image-captioning tasks.
We implemented all the algorithms using TensorFlow 1.0 [6]. For the RAML method, we used $\tau = 0.85$, which was the best performer in [17]. For the SPG algorithm, all the results were obtained using a variant of ROUGE [13] as the main reward metric $R$, and $J = 1$ (sample one target for each example, see Eq. (14)). We report the impact of $p_{drop}$ for values in $\{0.2, 0.4, 0.6, 0.8\}$.
In addition to using the main reward-metric for sampling targets, we also used it to weight the loss for target $z^{ij}$, as we found that it improved the performance of the SPG algorithm. We also applied a naive version of the policy gradient (PG) algorithm (without any variance reduction) by setting $p_{drop} = 0.0$, $W \to 0$, but failed to train any meaningful model with cold-start. When starting from a pre-trained MLE checkpoint, we found that it was unable to improve the original MLE result. This result confirms that variance-reduction is a requirement for the PG method to work, whereas our SPG method is free of such requirements.
5.1 Summarization Task: Headline Generation
Headline generation is a standard text generation task, taking as input a document and generating a concise summary/headline for it. In our experiments, the supervised data comes from the English Gigaword [9], and consists of news articles paired with their headlines. We use a training set of about 6 million article-headline pairs, in addition to two randomly-extracted validation and evaluation sets of 10K examples each. In addition to the Gigaword evaluation set, we also report results on the standard DUC-2004 test set. The DUC-2004 consists of 500 news articles paired with four different human-generated groundtruth summaries, capped at 75 bytes.‡ The expected output is a summary of roughly 14 words, created based on the input article.
We use the sequence-to-sequence recurrent neural network with attention model [2]. For encoding, we use a three-layer, 512-dimensional bidirectional RNN architecture, with a Gated Recurrent Unit (GRU) as the unit-cell [4]; for decoding, we use a similar three-layer, 512-dimensional GRU-based architecture. Both the encoder and decoder networks use a shared vocabulary and embedding matrix for encoding/decoding the word sequences, with a vocabulary consisting of 220K word types and a 512-dimensional embedding. We truncate the encoding sequences to a maximum of 30 tokens, and the decoding sequences to a maximum of 15 tokens. The model is optimized using ADAGRAD with a mini-batch size of 200, a learning rate of 0.01, and gradient clipping with norm equal to 4. We use 40 workers for computing the updates, and 10 parameter servers for model storing and (asynchronous and distributed) updating. We run the training procedure for 10M steps and pick the checkpoint with the best ROUGE-2 score on the Gigaword validation set.

Table 1: The F1 ROUGE-L scores (with standard errors) for headline generation.

Method  | Gigaword-10K | DUC-2004
MLE     | 35.2 ± 0.3   | 22.6 ± 0.6
RAML    | 36.4 ± 0.2   | 23.1 ± 0.6
SPG 0.2 | 36.6 ± 0.2   | 23.5 ± 0.6
SPG 0.4 | 37.8 ± 0.2   | 24.3 ± 0.5
SPG 0.6 | 37.4 ± 0.2   | 24.1 ± 0.5
SPG 0.8 | 37.3 ± 0.2   | 24.6 ± 0.5

We report ROUGE-L scores on the Gigaword evaluation set, as well as the DUC-2004 set, in Table 1. The scores are computed using the standard pyrouge package,§ with standard errors computed using bootstrap resampling [12]. As the numerical values indicate, the maximum performance is achieved when $p_{drop}$ is in mid-range, with 37.8 F1 ROUGE-L at $p_{drop} = 0.4$ on the large Gigaword evaluation set (a larger range for $p_{drop}$ between 0.4 and 0.8 gives comparable scores on the smaller DUC-2004 set). These numbers are significantly better compared to RAML (36.4 on Gigaword-10K), which in turn is significantly better compared to MLE (35.2).
‡ This dataset is available by request at http://duc.nist.gov/data.html.
§ Available at pypi.python.org/pypi/pyrouge/0.1.3
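The bootstrap standard errors mentioned above can be computed in a few lines; the sketch below is a hypothetical illustration that assumes per-example metric scores have already been computed.

```python
import numpy as np

def bootstrap_stderr(scores, n_boot=1000, seed=0):
    # Standard error of the mean metric via bootstrap resampling [12].
    rng = np.random.default_rng(seed)
    n = len(scores)
    means = [np.mean(rng.choice(scores, size=n, replace=True))
             for _ in range(n_boot)]
    return float(np.std(means))

scores = np.random.default_rng(1).uniform(0, 1, size=10_000)  # stand-in scores
print(bootstrap_stderr(scores))
```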
5.2 Automatic Image-Caption Generation
For the image-captioning task, we use the standard MSCOCO dataset [14]. The MSCOCO dataset contains 82K training images and 40K validation images, each with at least 5 groundtruth captions. The results are reported using the numerical values for the C40 testset reported by the MSCOCO online evaluation server.¶ Following standard practice, we combine the training and validation datasets for training our model, and hold out a subset of 4K images as our validation set.
Our model architecture is simple, following the approach taken by the Show-and-Tell approach [25]. We use a one-layer, 512-dimensional RNN architecture with an LSTM unit-cell, with a dropout rate equal to 0.3 applied to both input and output of the LSTM layer. We use the same vocabulary size of 8,854 word-types as in [25], with 512-dimensional word-embeddings. We truncate the decoding sequences to a maximum of 15 tokens. The input image is embedded by first passing it through a pretrained Inception-V3 network [22], and then projected to a 512-dimensional vector. The model is optimized using ADAGRAD with a mini-batch size of 25, a learning rate of 0.01, and gradient clipping with norm equal to 4. We run the training procedure for 4M steps and pick the checkpoint of the best CIDEr score [23] on our held-out 4K validation set.

Table 2: The CIDEr (with the coco-caption package) and ROUGE-L (with the pyrouge package) scores for image captioning on MSCOCO.

        | Validation-4K        | C40
Method  | CIDEr | ROUGE-L      | CIDEr
MLE     | 0.968 | 37.7 ± 0.1   | 0.94
RAML    | 0.997 | 38.0 ± 0.1   | 0.97
SPG 0.2 | 1.001 | 38.0 ± 0.1   | 0.98
SPG 0.4 | 1.013 | 38.1 ± 0.1   | 1.00
SPG 0.6 | 1.033 | 38.2 ± 0.1   | 1.01
SPG 0.8 | 1.009 | 37.7 ± 0.1   | 1.00

We report both CIDEr and ROUGE-L scores on our 4K Validation set, as well as CIDEr scores on the official C40 testset as reported by the MSCOCO online evaluation server, in Table 2. The CIDEr scores are reported using the coco-caption evaluation toolkit,‖ while ROUGE-L scores are reported using the standard pyrouge package (note that these ROUGE-L scores are generally lower than those reported by the coco-caption toolkit, as it reports an average score over multiple references, while the latter reports the maximum).
The evaluation results indicate that the SPG method is superior to both the MLE and RAML methods. The maximum score is obtained with $p_{drop} = 0.6$, with a CIDEr score of 1.01 on the C40 testset. In contrast, on the same testset, the RAML method has a CIDEr score of 0.97, and the MLE method a score of 0.94. In Figure 3, we show that the number of steps for SPG to converge is similar to the one for MLE/RAML. With the per-step inference cost of those methods being similar (see Section 3.1), the overall convergence time for the SPG method is similar to the MLE and RAML methods.

[Figure 3: Number of training steps (0 to 2.5M) vs. CIDEr scores on Validation-4K (roughly 0.90-1.04) for the MLE, RAML, and SPG 0.6 learning regimes.]
6 Conclusion
The reinforcement learning method presented in this paper, based on a softmax value function, is an efficient policy-gradient approach that eliminates the need for warm-start training and sample variance reduction during policy updates. We show that this approach allows us to tackle sequence generation tasks by training models that avoid two long-standing issues: the exposure-bias problem and the wrong-objective problem. Experimental results confirm that the proposed method achieves superior performance on two different structured output prediction problems, one for text-to-text (automatic summarization) and one for image-to-text (automatic image captioning). We plan to explore and exploit the properties of this method for other reinforcement learning problems as well as the impact of various, more-advanced reward functions on the performance of the learned models.
¶ Available at http://mscoco.org/dataset/#captions-eval.
‖ Available at https://github.com/tylin/coco-caption.
Acknowledgments
We greatly appreciate Sebastian Goodman for his contributions to the experiment code. We would
also like to acknowledge Ning Ye and Zhenhai Zhu for their help with the image captioning model
calibration as well as the anonymous reviewers for their valuable comments.
References
[1] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. SPICE: semantic
propositional image caption evaluation. In ECCV, 2016.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align
and translate. In Proceedings of ICLR, 2015.
[3] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for
sequence prediction with recurrent neural networks. In Advances in Neural Information
Processing Systems 28, pages 1171–1179, 2015.
[4] K. Cho, B. van Merrienboer, C. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and
Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine
translation. In Proceedings of EMNLP, pages 1724–1734, 2014.
[5] Marc P. Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics.
Foundations and Trends in Robotics, 2(1–2):1–142, 2013. ISSN 1935-8253.
[6] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
URL http://tensorflow.org/.
[7] Y. Wu et al. Google's neural machine translation system: Bridging the gap between human and
machine translation. CoRR, abs/1609.08144, 2016.
[8] L. C. Evans. An introduction to mathematical optimal control theory. Preprint, version 0.2.
[9] David Graff and Christopher Cieri. English Gigaword Fifth Edition LDC2003T05. In Linguistic
Data Consortium, Philadelphia, 2003.
[10] Ferenc Huszar. How (not) to train your generative model: Scheduled sampling, likelihood,
adversary? CoRR, abs/1511.05101, 2015.
[11] Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey.
The International Journal of Robotics Research, 32(11):1238–1274, 2013.
[12] Philipp Koehn. Statistical significance tests for machine translation evaluation. In Proceedings
of EMNLP, pages 388–395, 2004.
[13] Chin-Yew Lin and Franz Josef Och. Automatic evaluation of machine translation quality using
longest common subsequence and skip-bigram statistics. In Proceedings of ACL, 2004.
[14] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James
Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO:
common objects in context. CoRR, abs/1405.0312, 2014.
[15] Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. Optimization of image
description metrics using policy gradient methods. In International Conference on Computer
Vision (ICCV), 2017.
[16] Gergely Neu, Anders Jonsson, and Vicenç Gómez. A unified view of entropy-regularized
Markov decision processes. CoRR, abs/1705.07798, 2017.
[17] M. Norouzi, S. Bengio, Z. Chen, N. Jaitly, M. Schuster, Y. Wu, and D. Schuurmans. Reward
augmented maximum likelihood for neural structured prediction. In Advances in Neural
Information Processing Systems 29, pages 1723–1731, 2016.
[18] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic
evaluation of machine translation. In Proceedings of ACL, 2002.
[19] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level
training with recurrent neural networks. CoRR, abs/1511.06732, 2015.
[20] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[21] RS Sutton, D McAllester, S Singh, and Y Mansour. Policy gradient methods for reinforcement
learning with function approximation. In NIPS, 1999.
[22] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.
Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[23] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image
description evaluation. In The IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), June 2015.
[24] Arun Venkatraman, Martial Hebert, and J. Andrew Bagnell. Improving multi-step prediction of
learned time series models. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial
Intelligence, pages 3024–3030. AAAI Press, 2015.
[25] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural
image caption generator. In Proc. of IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2015.
[26] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
6,495 | 6,875 | Online Dynamic Programming
Holakou Rahmanian
Department of Computer Science
University of California Santa Cruz
Santa Cruz, CA 95060
[email protected]
Manfred K. Warmuth
Department of Computer Science
University of California Santa Cruz
Santa Cruz, CA 95060
[email protected]
Abstract
We consider the problem of repeatedly solving a variant of the same dynamic
programming problem in successive trials. An instance of the type of problems
we consider is to find a good binary search tree in a changing environment. At the
beginning of each trial, the learner probabilistically chooses a tree with the n keys
at the internal nodes and the n + 1 gaps between keys at the leaves. The learner
is then told the frequencies of the keys and gaps and is charged by the average
search cost for the chosen tree. The problem is online because the frequencies can
change between trials. The goal is to develop algorithms with the property that
their total average search cost (loss) in all trials is close to the total loss of the best
tree chosen in hindsight for all trials. The challenge, of course, is that the algorithm
has to deal with exponential number of trees. We develop a general methodology
for tackling such problems for a wide class of dynamic programming algorithms.
Our framework allows us to extend online learning algorithms like Hedge [16] and
Component Hedge [25] to a significantly wider class of combinatorial objects than
was possible before.
1 Introduction
Consider the following online learning problem. In each trial, the algorithm plays with a Binary
Search Tree (BST) for a given set of n keys. Then the adversary reveals a set of probabilities for the
n keys and their n + 1 gaps, and the algorithm incurs a linear loss of average search cost. The goal is
to predict with a sequence of BSTs minimizing regret, which is the difference between the total loss
of the algorithm and the total loss of the single best BST chosen in hindsight.
A natural approach to solve this problem is to keep track of a distribution on all possible BSTs during
the trials (e.g. by running the Hedge algorithm [16] with one weight per BST). However, this seems
impractical since it requires maintaining a weight vector of exponential size. Here we focus on
combinatorial objects that are comprised of n components where the number of objects is typically
exponential in n. For a BST the components are the depth values of the keys and the gaps in the tree.
This line of work requires that the loss of an object is linear in the components (see e.g. [35]). In our
BST examples the loss is simply the dot product between the components and the frequencies.
There has been much work on developing efficient algorithms for learning objects that are composed
of components when the loss is linear in the components. These algorithms get away with keeping
one weight per component instead of one weight per object. Previous work includes learning k-sets
[36], permutations [19, 37, 2] and paths in a DAG [35, 26, 18, 11, 5]. There are also general tools for
learning such combinatorial objects with linear losses. The Follow the Perturbed Leader (FPL) [22]
is a simple algorithm that adds random perturbations to the cumulative loss of each component, and
then predicts with the combinatorial object that has the minimum perturbed loss. The Component
Hedge (CH) algorithm [25] (and its extensions [34, 33, 17]) constitutes another generic approach.
Each object is typically represented as a bit vector over the set of components where the 1-bits
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
indicate the components appearing in the object. The algorithm maintains a mixture of the weight
vectors representing all objects. The weight space of CH is thus the convex hull of the weight
vectors representing the objects. This convex hull is a polytope of dimension n with the objects as
corners. For the efficiency of CH it is typically required that this polytope has a small number of
facets (polynomial in n). The CH algorithm predicts with a random corner of the polytope whose
expectation equals the maintained mixture vector in the polytope.
Unfortunately the results of CH and its current extensions cannot be directly applied to problems
like BST. This is because the BST polytope discussed above does not have a characterization with
polynomially many facets. There is an alternate polytope for BSTs with a polynomial number of
facets (called the associahedron [29]) but the average search cost is not linear in the components used
for this polytope. We close this gap by exploiting the dynamic programming algorithm which solves
the BST optimization problem. This gives us a polytope with a polynomial number of facets while
the loss is linear in the natural components of the BST problem.
Contributions We propose a general method for learning combinatorial objects whose optimization
problem can be solved efficiently via an algorithm belonging to a wide class of dynamic programming
algorithms. Examples include BST (see Section 4.1), Matrix-Chain Multiplication, Knapsack,
Rod Cutting, and Weighted Interval Scheduling (see Appendix A). Using the underlying graph
of subproblems induced by the dynamic programming algorithm for these problems, we define a
representation of the combinatorial objects by encoding them as a specific type of subgraphs called
k-multipaths. These subgraphs encode each object as a series of successive decisions (i.e. the
components) over which the loss is linear. Also the associated polytope has a polynomial number
of facets. These properties allow us to apply the standard Hedge [16, 28] and Component Hedge
algorithms [25].
Paper Outline In Section 2 we start with online learning of paths which are the simplest type
of subgraphs we consider. This section briefly describes the main two existing algorithms for the
path problem: (1) An efficient implementation of Hedge using path kernels and (2) Component
Hedge. Section 3 introduces a much richer class of subgraphs, called k-multipaths, and generalizes
the algorithms. In Section 4, we define a class of combinatorial objects recognized by dynamic
programming algorithms. Then we prove that minimizing a specific dynamic programming problem
from this class over trials reduces to online learning of k-multipaths. The online learning for BSTs
uses k-multipaths for k = 2 (Section 4.1). A large number of additional examples are discussed in
Appendix A. Finally, Section 5 concludes with comparison to other algorithms and future work and
discusses how our method is generalized for arbitrary ?min-sum? dynamic programming problems.
2 Background
Perhaps the simplest algorithms in online learning are the "experts algorithms" like the Randomized Weighted Majority [28] or the Hedge algorithm [16]. They keep track of a probability vector over all experts. The weight/probability $w_i$ of expert $i$ is proportional to $\exp(-\eta L(i))$, where $L(i)$ is the cumulative loss of expert $i$ until the current trial and $\eta$ is a non-negative learning rate. In this paper we
use exponentially many combinatorial objects (composed of components) as the set of experts. When
Hedge is applied to such combinatorial objects, we call it Expanded Hedge (EH) because it is applied
to a combinatorially "expanded domain". As we shall see, if the loss is linear over components (and
thus the exponential weight of an object becomes a product over components), then this often can be
exploited for obtaining an efficient implementations of EH.
Learning Paths The online shortest path has been explored both in the full information setting [35, 25] and various bandit settings [18, 4, 5, 12]. Concretely the problem in the full information setting is as follows. We are given a directed acyclic graph (DAG) $G = (V, E)$ with a designated source node $s \in V$ and sink node $t \in V$. In each trial, the algorithm predicts with a path from $s$ to $t$. Then for each edge $e \in E$, the adversary reveals a loss $\ell_e \in [0, 1]$. The loss of the algorithm is given by the sum of the losses of the edges along the predicted path. The goal is to minimize the regret which is the difference between the total loss of the algorithm and that of the single best path chosen in hindsight.
Expanded Hedge on Paths Takimoto and Warmuth [35] found an efficient implementation of EH by exploiting the additivity of the loss over the edges of a path. In this case the weight $w_\pi$ of a path $\pi$ is proportional to $\prod_{e\in\pi}\exp(-\eta L_e)$, where $L_e$ is the cumulative loss of edge $e$. The algorithm maintains one weight $w_e$ per edge such that the total weight of all edges leaving any non-sink node sums to 1. This implies that $w_\pi = \prod_{e\in\pi} w_e$ and sampling a path is easy. At the end of the current trial, each edge $e$ receives additional loss $\ell_e$, and the updated path weights have the form $w_\pi^{new} = \frac{1}{Z}\prod_{e\in\pi} w_e \exp(-\eta \ell_e)$, where $Z$ is a normalization. Now a certain efficient procedure called weight pushing [31] is applied. It finds new edge weights $w_e^{new}$ s.t. the total outflow out of each node is one and the updated weights are again in "product form", i.e. $w_\pi^{new} = \prod_{e\in\pi} w_e^{new}$, facilitating sampling.
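As a small illustration of the two ingredients just described, the following Python sketch samples a path from locally normalized edge weights and performs the multiplicative update followed by weight pushing. The data layout (nodes 0..sink in topological order, adjacency lists dag, edge-weight dictionaries) and all function names are our own illustrative assumptions, not from the paper.

    import math
    import random

    def weight_push(dag, w_hat, sink):
        """Renormalize edge weights so that outgoing weights sum to 1 at every
        non-sink node, without changing the induced path distribution."""
        Z = {sink: 1.0}
        for v in range(sink - 1, -1, -1):    # backwards in topological order
            Z[v] = sum(w_hat[(v, u)] * Z[u] for u in dag[v])
        return {(v, u): wh * Z[u] / Z[v] for (v, u), wh in w_hat.items()}

    def sample_path(dag, w, source, sink):
        """Sample a source-to-sink path edge by edge from the local weights."""
        path, v = [], source
        while v != sink:
            u = random.choices(dag[v], weights=[w[(v, x)] for x in dag[v]])[0]
            path.append((v, u))
            v = u
        return path

    def eh_trial(dag, w, losses, eta, sink):
        """One EH trial: multiply in the exponentiated losses, then re-push."""
        w_hat = {e: we * math.exp(-eta * losses[e]) for e, we in w.items()}
        return weight_push(dag, w_hat, sink)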
Theorem 1 (Takimoto-Warmuth [35]). Given a DAG $G = (V, E)$ with designated source node $s \in V$ and sink node $t \in V$, assume $N$ is the number of paths in $G$ from $s$ to $t$, $L^*$ is the total loss of the best path, and $B$ is an upper bound on the loss of any path in each trial. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, EH guarantees:
$$\mathbb{E}[L_{EH}] - L^* \le B\sqrt{2\,T \log N} + B \log N.$$
Component Hedge on Paths Koolen, Warmuth and Kivinen [25] applied CH to the path problem. The edges are the components of the paths. A path is encoded as a bit vector $\pi$ of $|E|$ components where the 1-bits are the edges in the path. The convex hull of all paths is called the unit-flow polytope. CH maintains a mixture vector in this polytope. The constraints of the polytope enforce an outflow of 1 from the source node $s$, and flow conservation at every other node but the sink node $t$. In each trial, the weight of each edge $w_e$ is updated multiplicatively by the factor $\exp(-\eta\ell_e)$. Then the weight vector is projected back to the unit-flow polytope via a relative entropy projection. This projection is achieved by iteratively projecting onto the flow constraint of a particular vertex and then repeatedly cycling through the vertices [8]. Finally, to sample with the same expectation as the mixture vector in the polytope, this vector is decomposed into paths using a greedy approach which removes one path at a time and zeros out at least one edge in the remaining mixture vector in each iteration.
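The greedy decomposition just mentioned is simple enough to sketch; the hypothetical helper below peels paths off a point of the unit-flow polytope until the mass is exhausted, zeroing out at least one edge per round. The edge-dictionary representation of the flow is an assumption made for illustration.

    def decompose_unit_flow(dag, flow, source, sink, tol=1e-12):
        """Greedily write a unit s-t flow as a sparse mixture of paths.
        Returns a list of (probability, path) pairs; each round zeros out
        at least one edge, so at most |E| rounds are needed."""
        flow = dict(flow)                      # work on a copy
        mixture, remaining = [], 1.0
        while remaining > tol:
            path, v = [], source
            while v != sink:                   # follow any positive-weight edge
                u = next(u for u in dag[v] if flow[(v, u)] > tol)
                path.append((v, u))
                v = u
            p = min(flow[e] for e in path)     # bottleneck mass of this path
            for e in path:
                flow[e] -= p
            mixture.append((p, path))
            remaining -= p
        return mixture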
Theorem 2 (Koolen-Warmuth-Kivinen [25]). Given a DAG $G = (V, E)$ with designated source node $s \in V$ and sink node $t \in V$, let $D$ be a length bound of the paths in $G$ from $s$ to $t$ against which the CH algorithm is compared. Also denote the total loss of the best path of length at most $D$ by $L^*$. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, CH guarantees:
$$\mathbb{E}[L_{CH}] - L^* \le D\sqrt{4\,T \log |V|} + 2\,D \log |V|.$$
Much of this paper is concerned with generalizing the tools sketched in this section from paths to
k-multipaths, from the unit-flow polytope to the k-flow polytope and developing a generalized version
of weight pushing for k-multipaths.
3 Learning k-Multipaths
As we shall see, k-multipaths will be subgraphs of k-DAGs built from k-multiedges. Examples of all
the definitions are given in Figure 1 for the case k = 2.
Definition 1 (k-DAG). A DAG $G = (V, E)$ is called a k-DAG if it has the following properties:
(i) There exists one designated "source" node $s \in V$ with no incoming edges.
(ii) There exists a set of "sink" nodes $T \subseteq V$ which is the set of nodes with no outgoing edges.
(iii) For all non-sink vertices $v$, the set of edges leaving $v$ is partitioned into disjoint sets of size $k$ which are called k-multiedges.
We denote the set of multiedges "leaving" vertex $v$ as $M_v$ and all multiedges of the DAG as $M$.
Each k-multipath can be generated by starting with a single multiedge at the source and choosing inflow many (i.e. number of incoming edges many) successor multiedges at the internal nodes (until we reach the sink nodes in $T$). An example of a 2-multipath is given in Figure 1. Recall that paths were described as bit vectors $\pi$ of size $|E|$ where the 1-bits were the edges in the path. In k-multipaths each edge bit $\pi_e$ becomes a non-negative count.
Figure 1: On the left we give an example of a 2-DAG. The source $s$ and the nodes in the first layer each have two 2-multiedges depicted in red and blue. The nodes in the next layer each have one 2-multiedge depicted in green. An example of 2-multipath in the 2-DAG is given on the right. The 2-multipath is represented as an $|E|$-dimensional count vector $\pi$. The grayed edges are the edges with count $\pi_e = 0$. All non-zero counts $\pi_e$ are shown next to their associated edges $e$. Note that for nodes in the middle layers, the outflow is always 2 times the inflow.
Definition 2 (k-multipath). Given a k-DAG $G = (V, E)$, let $\pi \in \mathbb{N}^{|E|}$ in which $\pi_e$ is associated with $e \in E$. Define the inflow $\pi^{in}(v) := \sum_{(u,v)\in E} \pi_{(u,v)}$ and the outflow $\pi^{out}(v) := \sum_{(v,u)\in E} \pi_{(v,u)}$. We call $\pi$ a k-multipath if it has the below properties:
(i) The outflow $\pi^{out}(s)$ of the source $s$ is $k$.
(ii) For any two edges $e, e'$ in a multiedge $m$ of $G$, $\pi_e = \pi_{e'}$. (When clear from the context, we denote this common value as $\pi_m$.)
(iii) For each vertex $v \in V - T - \{s\}$, the outflow is $k$ times the inflow, i.e. $\pi^{out}(v) = k \cdot \pi^{in}(v)$.
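To make Definition 2 concrete, here is a minimal Python check of properties (i)-(iii) for a candidate count vector; the containers (edges as pairs, multiedges as tuples of edges) are illustrative assumptions.

    from collections import defaultdict

    def is_k_multipath(counts, multiedges, source, sinks, k):
        """Check properties (i)-(iii) of Definition 2. `counts` maps each edge
        (v, u) to a nonnegative integer; `multiedges` is an iterable of edge
        tuples partitioning the out-edges of every non-sink node."""
        inflow, outflow = defaultdict(int), defaultdict(int)
        for (v, u), c in counts.items():
            outflow[v] += c
            inflow[u] += c
        if outflow[source] != k:                           # property (i)
            return False
        for m in multiedges:                               # property (ii)
            if len({counts[e] for e in m}) > 1:
                return False
        internal = set(outflow) - {source} - set(sinks)    # property (iii)
        return all(outflow[v] == k * inflow[v] for v in internal)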
k-Multipath Learning Problem We define the problem of online learning of k-multipaths on a given k-DAG as follows. In each trial, the algorithm randomly predicts with a k-multipath $\pi$. Then for each edge $e \in E$, the adversary reveals a loss $\ell_e \in [0, 1]$ incurred during that trial. The linear loss of the algorithm during this trial is given by $\pi \cdot \ell$. Observe that the online shortest path problem is a special case when $k = |T| = 1$. In the remainder of this section, we generalize the algorithms in Section 2 to the online learning problem of k-multipaths.
3.1 Expanded Hedge on k-Multipaths
We implement EH efficiently for learning k-multipaths by considering each k-multipath as an expert. Recall that each k-multipath can be generated by starting with a single multiedge at the source and choosing inflow many successor multiedges at the internal nodes. Multipaths are composed of multiedges as components and with each multiedge $m \in M$, we associate a weight $w_m$. We maintain a distribution $W$ over multipaths defined in terms of the weights $w \in \mathbb{R}_{\ge 0}^{|M|}$ on the multiedges. The distribution $W$ will have the following canonical properties:
Definition 3 (EH distribution properties).
1. The weights are in product form, i.e. $W(\pi) = \prod_{m\in M} (w_m)^{\pi_m}$. Recall that $\pi_m$ is the common value in $\pi$ among edges in $m$.
2. The weights are locally normalized, i.e. $\sum_{m\in M_v} w_m = 1$ for all $v \in V - T$.
3. The total path weight is one, i.e. $\sum_{\pi} W(\pi) = 1$.
Using these properties, sampling a k-multipath from $W$ can be easily done as follows. We start with sampling a single k-multiedge at the source and continue sampling inflow many successor multiedges at the internal nodes until the k-multipath reaches the sink nodes in $T$. Observe that $\pi_m$ indicates the number of times the k-multiedge $m$ is sampled through this process. EH updates the weights of the
multipaths as follows:
$$W^{new}(\pi) = \frac{1}{Z}\, W(\pi)\, \exp(-\eta\, \pi\cdot\ell) = \frac{1}{Z} \left[\prod_{m\in M} (w_m)^{\pi_m}\right] \exp\Big(-\eta \sum_{m\in M} \pi_m \sum_{e\in m} \ell_e\Big) = \frac{1}{Z} \prod_{m\in M} \Big[\underbrace{w_m \exp\big(-\eta \textstyle\sum_{e\in m} \ell_e\big)}_{:=\,\widehat{w}_m}\Big]^{\pi_m}.$$
Thus the weights $w_m$ of each k-multiedge $m \in M$ are updated multiplicatively to $\widehat{w}_m$ by multiplying the $w_m$ with the exponentiated loss factors $\exp(-\eta \sum_{e\in m} \ell_e)$ and then renormalizing with $Z$. Note that $\sum_{e\in m} \ell_e$ is the loss of multiedge $m$.
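A minimal sketch of one EH trial on k-multipaths may help here. It assumes nodes are integers in topological order and stores the weight of the j-th multiedge at node v as w[(v, j)]; this layout and the function names are our own, not the paper's.

    import math
    import random

    def sample_k_multipath(nodes, multiedges_at, w, source, sinks):
        """Sample a k-multipath: one multiedge at the source, then inflow-many
        successor multiedges at every internal node reached. `nodes` must be
        in topological order; multiedges_at[v][j] lists the k successors of
        the j-th multiedge at v, with locally normalized weights w[(v, j)]."""
        pending = {v: 0 for v in nodes}
        pending[source] = 1
        pi = {}                                  # edge counts of the sample
        for v in nodes:
            if v in sinks:
                continue
            for _ in range(pending[v]):          # draw inflow-many multiedges
                js = list(range(len(multiedges_at[v])))
                j = random.choices(js, weights=[w[(v, jj)] for jj in js])[0]
                for u in multiedges_at[v][j]:
                    pi[(v, u)] = pi.get((v, u), 0) + 1
                    pending[u] += 1
        return pi

    def multiedge_update(w, loss_of, eta):
        """Unnormalized multiplicative update: w_hat_m = w_m * exp(-eta * loss(m)),
        where loss(m) is the summed edge loss of multiedge m."""
        return {key: wm * math.exp(-eta * loss_of[key]) for key, wm in w.items()}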
Generalized Weight Pushing We generalize the weight pushing algorithm [31] to k-multipaths to reestablish the three canonical properties of Definition 3. The new weights $W^{new}(\pi) = \frac{1}{Z}\prod_{m\in M}(\widehat{w}_m)^{\pi_m}$ sum to 1 (i.e. Property (iii) holds) since $Z$ normalizes the weights. Our goal is to find new multiedge weights $w_m^{new}$ so that the other two properties hold as well, i.e. $W^{new}(\pi) = \prod_{m\in M} (w_m^{new})^{\pi_m}$ and $\sum_{m\in M_v} w_m^{new} = 1$ for all non-sinks $v$. For this purpose, we introduce a normalization $Z_v$ for each vertex $v$. Note that $Z_s = Z$ where $s$ is the source node. Now the generalized weight pushing finds new weights $w_m^{new}$ for the multiedges to be used in the next trial:
1. For sinks $v \in T$, $Z_v := 1$.
2. Recursing backwards in the DAG, let $Z_v := \sum_{m\in M_v} \widehat{w}_m \prod_{u:(v,u)\in m} Z_u$ for all non-sinks $v$.
3. For each multiedge $m$ from $v$ to $u_1, \ldots, u_k$, $w_m^{new} := \widehat{w}_m \prod_{i=1}^{k} Z_{u_i} / Z_v$.
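These three steps translate almost line for line into code. The sketch below reuses the w[(v, j)] layout from above and assumes `nodes` is topologically ordered; it is an illustration of the recursion, not a reference implementation.

    from math import prod

    def generalized_weight_push(nodes, multiedges_at, w_hat, sinks):
        """Steps 1-3 above: recompute locally normalized multiedge weights
        from the unnormalized w_hat[(v, j)]."""
        Z = {}
        for v in reversed(nodes):
            if v in sinks:
                Z[v] = 1.0                                        # step 1
            else:                                                 # step 2
                Z[v] = sum(w_hat[(v, j)] * prod(Z[u] for u in succ)
                           for j, succ in enumerate(multiedges_at[v]))
        w_new = {}
        for v in nodes:                                           # step 3
            if v in sinks:
                continue
            for j, succ in enumerate(multiedges_at[v]):
                w_new[(v, j)] = w_hat[(v, j)] * prod(Z[u] for u in succ) / Z[v]
        return w_new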
Appendix B proves the correctness and time complexity of this generalized weight pushing algorithm.
Regret Bound In order to apply the regret bound of EH [16], we have to initialize the distribution $W$ on k-multipaths to the uniform distribution. This is achieved by setting all $w_m$ to 1 followed by an application of generalized weight pushing. Note that Theorem 1 is a special case of the below theorem for $k = 1$.
Theorem 3. Given a k-DAG $G$ with designated source node $s$ and sink nodes $T$, assume $N$ is the number of k-multipaths in $G$ from $s$ to $T$, $L^*$ is the total loss of the best k-multipath, and $B$ is an upper bound on the loss of any k-multipath in each trial. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, EH guarantees:
$$\mathbb{E}[L_{EH}] - L^* \le B\sqrt{2\,T \log N} + B \log N.$$
3.2 Component Hedge on k-Multipaths
We implement CH efficiently for learning k-multipaths. Here the k-multipaths are the objects which are represented as $|E|$-dimensional count vectors $\pi$ (Definition 2).¹ The algorithm maintains an $|E|$-dimensional mixture vector $w$ in the convex hull of count vectors. This hull is the following polytope over weight vectors obtained by relaxing the integer constraints on the count vectors:
Definition 4 (k-flow polytope). Given a k-DAG $G = (V, E)$, let $w \in \mathbb{R}_{\ge 0}^{|E|}$ in which $w_e$ is associated with $e \in E$. Define the inflow $w^{in}(v) := \sum_{(u,v)\in E} w_{(u,v)}$ and the outflow $w^{out}(v) := \sum_{(v,u)\in E} w_{(v,u)}$. $w$ belongs to the k-flow polytope of $G$ if it has the below properties:
(i) The outflow $w^{out}(s)$ of the source $s$ is $k$.
(ii) For any two edges $e, e'$ in a multiedge $m$ of $G$, $w_e = w_{e'}$.
(iii) For each vertex $v \in V - T - \{s\}$, the outflow is $k$ times the inflow, i.e. $w^{out}(v) = k \cdot w^{in}(v)$.
¹ For convenience we use the edges as components for CH instead of the multiedges as for EH.
In each trial, the weight of each edge $w_e$ is updated multiplicatively to $\widehat{w}_e = w_e \exp(-\eta\ell_e)$ and then the weight vector $\widehat{w}$ is projected back to the k-flow polytope via a relative entropy projection:
$$w^{new} := \operatorname*{argmin}_{w\,\in\, \text{k-flow polytope}} \Delta(w \,\|\, \widehat{w}), \quad \text{where} \quad \Delta(a\|b) = \sum_i a_i \log\frac{a_i}{b_i} + b_i - a_i.$$
This projection is achieved by repeatedly cycling over the vertices and enforcing the local flow constraints at the current vertex. Based on the properties of the k-flow polytope in Definition 4, the corresponding projection steps can be rewritten as follows:
(i) Normalize the outflow $w^{out}(s)$ to $k$.
(ii) Given a multiedge $m$, set the $k$ weights in $m$ to their geometric average.
(iii) Given a vertex $v \in V - T - \{s\}$, scale the adjacent edges of $v$ s.t.
$$w^{out}(v) := \sqrt[k+1]{k\,(w^{out}(v))^k\, w^{in}(v)} \quad \text{and} \quad w^{in}(v) := \frac{1}{k}\sqrt[k+1]{k\,(w^{out}(v))^k\, w^{in}(v)}.$$
See Appendix C for details.
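One sweep of these local corrections might look as follows (in practice the sweep is cycled over the vertices until convergence); the edge and multiedge containers are again illustrative assumptions.

    from math import prod

    def projection_sweep(out_edges, in_edges, multiedges, w, source, sinks, k):
        """One sweep of the local corrections (i)-(iii).
        out_edges[v] / in_edges[v] list the edges leaving / entering v."""
        s_out = sum(w[e] for e in out_edges[source])   # (i) source outflow = k
        for e in out_edges[source]:
            w[e] *= k / s_out
        for m in multiedges:                           # (ii) geometric average
            g = prod(w[e] for e in m) ** (1.0 / len(m))
            for e in m:
                w[e] = g
        for v in out_edges:                            # (iii) outflow = k * inflow
            if v == source or v in sinks:
                continue
            w_out = sum(w[e] for e in out_edges[v])
            w_in = sum(w[e] for e in in_edges[v])
            target = (k * w_out ** k * w_in) ** (1.0 / (k + 1))
            for e in out_edges[v]:
                w[e] *= target / w_out
            for e in in_edges[v]:
                w[e] *= target / (k * w_in)
        return w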
Decomposition The flow polytope has exponentially many objects as its corners. We now rewrite
any vector w in the polytope as a mixture of |M | objects. CH then predicts with a random object
drawn from this sparse mixture. The mixture vector is decomposed by greedily removing a multipath
from the current weight vector as follows: Ignore all edges with zero weights. Pick a multiedge at
s and iteratively inflow many multiedges at the internal nodes until you reach the sink nodes. Now
subtract that constructed multipath from the mixture vector w scaled by its minimum edge weight.
This zeros out at least k edges and maintains the flow constraints at the internal nodes.
Regret Bound The regret bound for CH depends on a good choice of the initial weight vector $w^{init}$ in the k-flow polytope. We use an initialization technique recently introduced in [32]. Instead of explicitly selecting $w^{init}$ in the k-flow polytope, the initial weight is obtained by projecting a point $\widehat{w}^{init}$ outside of the polytope to the inside. This yields the following regret bounds (Appendix D):
Theorem 4. Given a k-DAG $G = (V, E)$, let $D$ be the upper bound for the 1-norm of the k-multipaths in $G$. Also denote the total loss of the best k-multipath by $L^*$. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, CH guarantees:
$$\mathbb{E}[L_{CH}] - L^* \le D\sqrt{2\,T\,(2 \log |V| + \log D)} + 2\,D \log |V| + D \log D.$$
Moreover, when the k-multipaths are bit vectors, then:
$$\mathbb{E}[L_{CH}] - L^* \le D\sqrt{4\,T \log |V|} + 2\,D \log |V|.$$
Notice that by setting |T | = k = 1, the algorithm for path learning in [25] is recovered. Also observe
that Theorem 2 is a corollary of Theorem 4 since every path is represented as a bit vector.
4 Online Dynamic Programming with Multipaths
We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials. We will use our definition of k-DAGs to describe a certain type of dynamic programming problem. The vertex set $V$ is a set of subproblems to be solved. The source node $s \in V$ is the final subproblem. The sink nodes $T \subseteq V$ are the base subproblems. An edge from a node $v$ to another node $v'$ means that subproblem $v$ may recurse on $v'$. We assume a non-base subproblem $v$ always breaks into exactly $k$ smaller subproblems. A step of the dynamic programming recursion is thus represented by a k-multiedge. We assume the sets of $k$ subproblems between possible recursive calls at a node are disjoint. This corresponds to the fact that the choice of multiedges at a node partitions the edge set leaving that node.
There is a loss associated with any sink node in $T$. Also with the recursions at the internal node $v$ a local loss will be added to the loss of the subproblems that depends on $v$ and the chosen k-multiedge
leaving $v$. Recall that $M_v$ is the set of multiedges leaving $v$. We can handle the following type of "min-sum" recurrences:
$$\mathrm{OPT}(v) = \begin{cases} L_T(v) & v \in T \\ \min_{m\in M_v} \Big[\sum_{u:(v,u)\in m} \mathrm{OPT}(u) + L_M(m)\Big] & v \in V - T. \end{cases}$$
The problem of repeatedly solving such a dynamic programming problem over trials now becomes the problem of online learning of k-multipaths in this k-DAG. Note that due to the correctness of the dynamic programming, every possible solution to the dynamic programming can be encoded as a k-multipath in the k-DAG and vice versa.
The loss of a given multipath is the sum of $L_M(m)$ over all multiedges $m$ in the multipath plus the sum of $L_T(v)$ for all sink nodes $v$ at the bottom of the multipath. To capture the same loss, we can alternatively define losses over the edges of the k-DAG. Concretely, for each edge $(v, u)$ in a given multiedge $m$ define $\ell_{(v,u)} := \frac{1}{k} L_M(m) + \mathbb{1}_{\{u\in T\}} L_T(u)$ where $\mathbb{1}_{\{\cdot\}}$ is the indicator function.
In summary we are addressing the above min-sum type dynamic programming problem specified by
a k-DAG and local losses where for the sake of simplicity we made two assumptions: each non-base
subproblem breaks into exactly k smaller subproblems and the choice of k subproblems at a node
are disjoint. We briefly discuss in the conclusion section how to generalize our methods to arbitrary
min-sum dynamic programming problems, where the sets of subproblems can overlap and may have
different sizes.
4.1 The Example of Learning Binary Search Trees
Recall again the online version of the optimal binary search tree (BST) problem [10]: We are given a set of $n$ distinct keys $K_1 < K_2 < \ldots < K_n$ and $n + 1$ gaps or "dummy keys" $D_0, \ldots, D_n$ indicating search failures such that for all $i \in \{1..n\}$, $D_{i-1} < K_i < D_i$. In each trial, the algorithm predicts with a BST. Then the adversary reveals a frequency vector $\ell = (p, q)$ with $p \in [0, 1]^n$, $q \in [0, 1]^{n+1}$ and $\sum_{i=1}^{n} p_i + \sum_{j=0}^{n} q_j = 1$. For each $i, j$, the frequencies $p_i$ and $q_j$ are the search probabilities for $K_i$ and $D_j$, respectively. The loss is defined as the average search cost in the predicted BST which is the average depth² of all the nodes in the BST:
$$\text{loss} = \sum_{i=1}^{n} \operatorname{depth}(K_i) \cdot p_i + \sum_{j=0}^{n} \operatorname{depth}(D_j) \cdot q_j.$$
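For concreteness, the average search cost of a fixed BST under given frequencies can be computed by a single traversal. The tuple encoding of trees below (internal nodes ('K', i, left, right), dummy leaves ('D', j)) is a hypothetical representation chosen for the sketch.

    def average_search_cost(root, p, q):
        """Average search cost of a BST, root at depth 1. Internal nodes are
        ('K', i, left, right) for key K_i; leaves are ('D', j) for dummy D_j;
        p[i] and q[j] are the corresponding search frequencies."""
        def walk(node, depth):
            if node[0] == 'D':
                return q[node[1]] * depth
            _, i, left, right = node
            return p[i] * depth + walk(left, depth + 1) + walk(right, depth + 1)
        return walk(root, 1)

    # Example for n = 2: root K_2 with left child K_1:
    # tree = ('K', 2, ('K', 1, ('D', 0), ('D', 1)), ('D', 2))
    # cost = average_search_cost(tree, p, q)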
Convex Hull of BSTs Implementing CH requires a representation where not only the BST polytope
has a polynomial number of facets, but also the loss must be linear over the components. Since the
average search cost is linear in the depth(Ki ) and depth(Dj ) variables, it would be natural to choose
these 2n + 1 variables as the components for representing a BST. Unfortunately the convex hull of all
BSTs when represented this way is not known to be a polytope with a polynomial number of facets.
There is an alternate characterization of the convex hull of BSTs with n internal nodes called the
associahedron [29]. This polytope has polynomial in n many facets but the average search cost is not
linear in the $n$ components associated with this polytope³.
The Dynamic Programming Representation The optimal BST problem can be solved via dynamic programming [10]. Each subproblem is denoted by a pair $(i, j)$, for $1 \le i \le n + 1$ and $i - 1 \le j \le n$, indicating the optimal BST problem with the keys $K_i, \ldots, K_j$ and dummy keys $D_{i-1}, \ldots, D_j$. The base subproblems are $(i, i - 1)$, for $1 \le i \le n + 1$ and the final subproblem is $(1, n)$. The BST dynamic programming problem uses the following recurrence:
$$\mathrm{OPT}(i, j) = \begin{cases} q_{i-1} & j = i - 1 \\ \min_{i \le r \le j} \Big\{\mathrm{OPT}(i, r-1) + \mathrm{OPT}(r+1, j) + \sum_{k=i}^{j} p_k + \sum_{k=i-1}^{j} q_k \Big\} & i \le j. \end{cases}$$
This recurrence always recurses on 2 subproblems. Therefore we have $k = 2$ and the associated 2-DAG has the subproblems/vertices $V = \{(i, j) \mid 1 \le i \le n + 1,\ i - 1 \le j \le n\}$, source $s = (1, n)$
² Here the root starts at depth 1.
³ Concretely, the $i$th component is $a_i b_i$ where $a_i$ and $b_i$ are the number of nodes in the left and right subtrees of the $i$th internal node $K_i$, respectively.
Figure 2: (left) Two different 2-multipaths in the DAG, in red and blue, and (right) their associated BSTs of $n = 5$ keys and 6 "dummy" keys. Note that each node, and consequently edge, is visited at most once in these 2-multipaths.
Problem | FPL | EH | CH
Optimal Binary Search Trees | $O(n^{3/2}\sqrt{T})$ | $O(n^{3/2}\sqrt{T})$ | $O(n\,(\log n)^{1/2}\sqrt{T})$
Matrix-Chain Multiplications⁴ | -- | $O(n^{3/2}\,(d_{max})^3\sqrt{T})$ | $O(n\,(\log n)^{1/2}\,(d_{max})^3\sqrt{T})$
Knapsack | $O(n^{3/2}\sqrt{T})$ | $O(n^{3/2}\sqrt{T})$ | $O(n\,(\log nC)^{1/2}\sqrt{T})$
Rod Cutting | $O(n^{3/2}\sqrt{T})$ | $O(n^{3/2}\sqrt{T})$ | $O(n\,(\log n)^{1/2}\sqrt{T})$
Weighted Interval Scheduling | $O(n^{3/2}\sqrt{T})$ | $O(n^{3/2}\sqrt{T})$ | $O(n\,(\log n)^{1/2}\sqrt{T})$
Table 1: Performance of various algorithms over different problems. $C$ is the capacity in the Knapsack problem, and $d_{max}$ is the upper-bound on the dimension in the matrix-chain multiplication problem.
and sinks $T = \{(i, i-1) \mid 1 \le i \le n + 1\}$. Also at node $(i, j)$, the set $M_{(i,j)}$ consists of $(j - i + 1)$ many 2-multiedges. The $r$th 2-multiedge leaving $(i, j)$ is comprised of 2 edges going from the node $(i, j)$ to the nodes $(i, r - 1)$ and $(r + 1, j)$. Figure 2 illustrates the 2-DAG and 2-multipaths associated with BSTs.
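The offline problem behind this recurrence is the textbook O(n^3) dynamic program; a compact sketch, with 1-based key indices as an indexing convention of ours, is:

    def optimal_bst_cost(p, q, n):
        """DP for the recurrence above; p[1..n] and q[0..n] are the key and
        dummy-key frequencies (index 0 of p is unused)."""
        OPT = [[0.0] * (n + 1) for _ in range(n + 2)]
        W = [[0.0] * (n + 1) for _ in range(n + 2)]   # W[i][j] = sum p_i..p_j
        for i in range(1, n + 2):                     #          + q_{i-1}..q_j
            OPT[i][i - 1] = q[i - 1]
            W[i][i - 1] = q[i - 1]
        for length in range(1, n + 1):
            for i in range(1, n - length + 2):
                j = i + length - 1
                W[i][j] = W[i][j - 1] + p[j] + q[j]
                OPT[i][j] = min(OPT[i][r - 1] + OPT[r + 1][j]
                                for r in range(i, j + 1)) + W[i][j]
        return OPT[1][n]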
Since the above recurrence relation correctly solves the offline optimization problem, every 2-multipath in the DAG represents a BST, and every possible BST can be represented by a 2-multipath of the 2-DAG. We have $O(n^3)$ edges and multiedges which are the components of our new representation. The loss of each 2-multiedge leaving $(i, j)$ is $\sum_{k=i}^{j} p_k + \sum_{k=i-1}^{j} q_k$ and is upper bounded by 1. Most crucially, the original average search cost is linear in the losses of the multiedges and the 2-flow polytope has $O(n^3)$ facets.
Regret Bound As mentioned earlier, the number of binary trees with $n$ nodes is the $n$th Catalan number. Therefore $N = \frac{(2n)!}{n!\,(n+1)!} \in (2^n, 4^n)$. Also note that the expected search cost is bounded by $B = n$ in each trial. Thus using Theorem 3, EH achieves a regret bound of $O(n^{3/2}\sqrt{T})$.
Additionally, notice that the number of subproblems in the dynamic programming problem for BSTs is $\frac{(n+1)(n+2)}{2}$. This is also the number of vertices in the associated 2-DAG and each 2-multipath representing a BST consists of exactly $D = 2n$ edges. Therefore using Theorem 4, CH achieves a regret bound of $O(n\,(\log n)^{1/2}\sqrt{T})$.
5 Conclusions and Future Work
We developed a general framework for online learning of combinatorial objects whose offline optimization problems can be efficiently solved via an algorithm belonging to a large class of dynamic programming algorithms. In addition to BSTs, several example problems are discussed in Appendix A. Table 1 gives the performance of EH and CH in our dynamic programming framework and compares it with the Follow the Perturbed Leader (FPL) algorithm. FPL additively perturbs the losses and then uses dynamic programming to find the solution of minimum loss. FPL essentially always matches EH, and CH is better than both in all cases.
⁴ The loss of a fully parenthesized matrix-chain multiplication is the number of scalar multiplications in the execution of all matrix products. This number cannot be expressed as a linear loss over the dimensions of the matrices. We are thus unaware of a way to apply FPL to this problem using the dimensions of the matrices as the components. See Appendix A.1 for more details.
We conclude with a few remarks:
• For EH, projections are simply a renormalization of the weight vector. In contrast, iterative Bregman projections are often needed for projecting back into the polytope used by CH [25, 19]. These methods are known to converge to the exact projection [8, 6] and are reported to be very efficient empirically [25]. For the special cases of Euclidean projections [13] and Sinkhorn Balancing [24], linear convergence has been proven. However we are unaware of a linear convergence proof for general Bregman divergences. Regardless of the convergence rate, the remaining gaps to the exact projections have to be accounted for as additional loss in the regret bounds. We do this in Appendix E for CH.
• For the sake of concreteness, we focused in this paper on dynamic programming problems with "min-sum" recurrence relations, a fixed branching factor $k$ and mutually exclusive sets of choices at a given subproblem. However, our results can be generalized to arbitrary "min-sum" dynamic programming problems with the methods introduced in [30]: We let the multiedges in $G$ form hyperarcs, each of which is associated with a loss. Furthermore, each combinatorial object is encoded as a hyperpath, which is a sequence of hyperarcs from the source to the sinks. The polytope associated with such a dynamic programming problem is defined by flow-type constraints over the underlying hypergraph $G$ of subproblems. Thus online learning a dynamic programming solution becomes a problem of learning hyperpaths in a hypergraph, and the techniques introduced in this paper let us implement EH and CH for this more general class of dynamic programming problems.
• In this work we use dynamic programming algorithms for building polytopes for combinatorial objects that have a polynomial number of facets. The technique of going from the original polytope to a higher dimensional polytope in order to reduce the number of facets is known as extended formulation (see e.g. [21]). In the learning application we also need the additional requirement that the loss is linear in the components of the objects. A general framework of using extended formulations to develop learning algorithms has recently been explored in [32].
• We hope that many of the techniques from the expert setting literature can be adapted to learning combinatorial objects that are composed of components. This includes lower bounding weights for shifting comparators [20] and sleeping experts [7, 1]. Also in this paper, we focus on the full information setting where the adversary reveals the entire loss vector in each trial. In contrast, in full- and semi-bandit settings, the adversary only reveals partial information about the loss. Significant work has already been done in learning combinatorial objects in full- and semi-bandit settings [3, 18, 4, 27, 9]. It seems that the techniques introduced in the paper will also carry over.
• Online Markov Decision Processes (MDPs) [15, 14] is an online learning model that focuses on the sequential revelation of an object using a sequential state based model. This is very much related to learning paths and the sequential decisions made in our dynamic programming framework. Connecting our work with the large body of research on MDPs is a promising direction of future research.
• There are several important dynamic programming instances that are not included in the class considered in this paper: The Viterbi algorithm for finding the most probable path in a graph, and variants of the Cocke-Younger-Kasami (CYK) algorithm for parsing probabilistic context-free grammars. The solutions for these problems are min-sum type optimization problems after taking a log of the probabilities. However taking logs creates unbounded losses. Extending our methods to these dynamic programming problems would be very worthwhile.
Acknowledgments We thank S.V.N. Vishwanathan for initiating and guiding much of this research.
We also thank Michael Collins for helpful discussions and pointers to the literature on hypergraphs and
PCFGs. This research was supported by the National Science Foundation (NSF grant IIS-1619271).
References
[1] Dmitry Adamskiy, Manfred K Warmuth, and Wouter M Koolen. Putting Bayes to sleep. In Advances in Neural Information Processing Systems, pages 135-143, 2012.
[2] Nir Ailon. Improved bounds for online learning over the Permutahedron and other ranking polytopes. In AISTATS, pages 29-37, 2014.
[3] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Minimax policies for combinatorial prediction games. In COLT, volume 19, pages 107-132, 2011.
[4] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31-45, 2013.
[5] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. Journal of Computer and System Sciences, 74(1):97-114, 2008.
[6] Heinz H Bauschke and Jonathan M Borwein. Legendre functions and the method of random Bregman projections. Journal of Convex Analysis, 4(1):27-67, 1997.
[7] Olivier Bousquet and Manfred K Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3(Nov):363-396, 2002.
[8] Lev M Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200-217, 1967.
[9] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404-1422, 2012.
[10] Thomas H. Cormen, Charles Eric Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, Cambridge, 2009.
[11] Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, and Manfred Warmuth. On-line learning algorithms for path experts with non-additive losses. In Conference on Learning Theory, pages 424-447, 2015.
[12] Varsha Dani, Sham M Kakade, and Thomas P Hayes. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, pages 345-352, 2008.
[13] Frank Deutsch. Dykstra's cyclic projections algorithm: the rate of convergence. In Approximation Theory, Wavelets and Applications, pages 87-94. Springer, 1995.
[14] Travis Dick, Andras Gyorgy, and Csaba Szepesvari. Online learning in Markov decision processes with changing cost sequences. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 512-520, 2014.
[15] Eyal Even-Dar, Sham M Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 34(3):726-736, 2009.
[16] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[17] Swati Gupta, Michel Goemans, and Patrick Jaillet. Solving combinatorial games using products, projections and lexicographically optimal bases. Preprint arXiv:1603.00522, 2016.
[18] András György, Tamás Linder, Gábor Lugosi, and György Ottucsák. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8(Oct):2369-2403, 2007.
[19] David P Helmbold and Manfred K Warmuth. Learning permutations with exponential weights. The Journal of Machine Learning Research, 10:1705-1736, 2009.
[20] Mark Herbster and Manfred K Warmuth. Tracking the best expert. Machine Learning, 32(2):151-178, 1998.
[21] Volker Kaibel. Extended formulations in combinatorial optimization. Preprint arXiv:1104.1023, 2011.
[22] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
[23] Jon Kleinberg and Eva Tardos. Algorithm Design. Addison Wesley, 2006.
[24] Philip A Knight. The Sinkhorn-Knopp algorithm: convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261-275, 2008.
[25] Wouter M Koolen, Manfred K Warmuth, and Jyrki Kivinen. Hedging structured concepts. In Conference on Learning Theory, pages 239-254. Omnipress, 2010.
[26] Dima Kuzmin and Manfred K Warmuth. Optimum follow the leader algorithm. In Learning Theory, pages 684-686. Springer, 2005.
[27] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In Artificial Intelligence and Statistics, pages 535-543, 2015.
[28] Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[29] Jean-Louis Loday. The multiple facets of the associahedron. Proc. 2005 Academy Coll. Series, 2005.
[30] R Kipp Martin, Ronald L Rardin, and Brian A Campbell. Polyhedral characterization of discrete dynamic programming. Operations Research, 38(1):127-138, 1990.
[31] Mehryar Mohri. Weighted automata algorithms. In Handbook of Weighted Automata, pages 213-254. Springer, 2009.
[32] Holakou Rahmanian, David Helmbold, and S.V.N. Vishwanathan. Online learning of combinatorial objects via extended formulation. Preprint arXiv:1609.05374, 2017.
[33] Arun Rajkumar and Shivani Agarwal. Online decision-making in general combinatorial spaces. In Advances in Neural Information Processing Systems, pages 3482-3490, 2014.
[34] Daiki Suehiro, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Kiyohito Nagano. Online prediction under submodular constraints. In International Conference on Algorithmic Learning Theory, pages 260-274. Springer, 2012.
[35] Eiji Takimoto and Manfred K Warmuth. Path kernels and multiplicative updates. The Journal of Machine Learning Research, 4:773-818, 2003.
[36] Manfred K Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(10):2287-2320, 2008.
[37] Shota Yasutake, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Masayuki Takeda. Online linear optimization over permutations. In Algorithms and Computation, pages 534-543. Springer, 2011.
6,496 | 6,876 | Alternating Estimation for Structured
High-Dimensional Multi-Response Models
Sheng Chen
Arindam Banerjee
Dept. of Computer Science & Engineering
University of Minnesota, Twin Cities
{shengc,banerjee}@cs.umn.edu
Abstract
We consider the problem of learning high-dimensional multi-response linear models with structured parameters. By exploiting the noise correlations among different
responses, we propose an alternating estimation (AltEst) procedure to estimate
the model parameters based on the generalized Dantzig selector (GDS). Under
suitable sample size and resampling assumptions, we show that the error of the
estimates generated by AltEst, with high probability, converges linearly to certain
minimum achievable level, which can be tersely expressed by a few geometric
measures, such as Gaussian width of sets related to the parameter structure. To the
best of our knowledge, this is the first non-asymptotic statistical guarantee for such
AltEst-type algorithm applied to estimation with general structures.
1 Introduction
Multi-response (a.k.a. multivariate) linear models [2, 8, 20, 21] have found numerous applications in
real-world problems, e.g. expression quantitative trait loci (eQTL) mapping in computational biology
[28], land surface temperature prediction in climate informatics [17], neural semantic basis discovery
in cognitive science [30], etc. Unlike the simple linear model where each response is a scalar, one obtains a response vector at each observation in the multi-response model, given as (noisy) linear combinations
of predictors, and the parameter (i.e., coefficient vector) to learn can be either response-specific
(i.e., allowed to be different for every response), or shared by all responses. The multi-response
model has been well studied under the context of the multi-task learning [10], where each response is
coined as a task. In recent years, the multi-task learning literature have largely focused on exploring
the parameter structure across tasks via convex formulations [15, 3, 26]. Another emphasis area in
multi-response modeling is centered around the exploitation of the noise correlation among different
responses [35, 36, 29, 40, 42], instead of assuming that the noise is independent for each response.
To be specific, we consider the following multi-response linear models with m real-valued outputs,
$$y_i = X_i \theta^* + \eta_i, \qquad \eta_i \sim N(0, \Sigma^*), \qquad (1)$$
where $y_i \in \mathbb{R}^m$ is the response vector, $X_i \in \mathbb{R}^{m\times p}$ consists of $m$ $p$-dimensional feature vectors, and $\eta_i \in \mathbb{R}^m$ is a noise vector sampled from a multivariate zero-mean Gaussian distribution with covariance $\Sigma^*$. For simplicity, we assume $\operatorname{Diag}(\Sigma^*) = I_{m\times m}$ throughout the paper. The $m$ responses share the same underlying parameter $\theta^* \in \mathbb{R}^p$, which corresponds to the so-called pooled model [19]. In fact, this seemingly restrictive setting is general enough to encompass the model with response-specific parameters, which can be realized by block-diagonalizing rows of $X_i$ and stacking all coefficient vectors into a "long" vector. Under the assumption of correlated noise, the true noise covariance structure $\Sigma^*$ is usually unknown. Therefore it is typically required to estimate the parameter $\theta^*$ along with the covariance $\Sigma^*$. In practice, we observe $n$ data points, denoted by
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
$D = \{(X_i, y_i)\}_{i=1}^{n}$, and the maximum likelihood estimator (MLE) is simply as follows,
$$\big(\widehat{\theta}_{MLE}, \widehat{\Sigma}_{MLE}\big) = \operatorname*{argmin}_{\theta\in\mathbb{R}^p,\ \Sigma \succ 0}\ \frac{1}{2}\log|\Sigma| + \frac{1}{2n}\sum_{i=1}^{n} \big\|\Sigma^{-\frac{1}{2}}(y_i - X_i\theta)\big\|_2^2 \qquad (2)$$
Although being convex w.r.t. either $\theta$ or $\Sigma$ when the other is fixed, the optimization problem associated with the MLE is jointly non-convex for $\theta$ and $\Sigma$. A popular approach to dealing with such problem is alternating minimization (AltMin), i.e., alternately solving for $\theta$ (and $\Sigma$) while keeping $\Sigma$ (and $\theta$) fixed. The AltMin algorithm for (2) iteratively performs two simple steps, solving least squares for $\theta$ and computing the empirical noise covariance for $\Sigma$. Recent work [24] has established
the non-asymptotic error bound of this approach for (2) with a brief extension to sparse parameter
setting using iterative hard thresholding method [25]. But they did not allow more general structure
of the parameter. Previous works [35, 29, 33] also considered the regularized MLE approaches for
multi-response models with sparse parameters, which are solved by AltMin-type algorithms as well.
Unfortunately, none of those works provide finite-sample statistical guarantees for their algorithms.
AltMin technique has also been applied to many other problems, such as matrix completion [23],
sparse coding [1], and mixed linear regression [41], with provable performance guarantees. Despite
the success of AltMin, most existing works are dedicated to recovering unstructured sparse or
low-rank parameters, with little attention paid to general structures, e.g., overlapping sparsity [22],
hierarchical sparsity [27], k-support sparsity [4], etc.
In this paper, we study the multi-response linear model in high-dimensional setting, i.e., sample size n
is smaller than the problem dimension $p$, and the coefficient vector $\theta^*$ is assumed to possess a general low-complexity structure, which can be essentially captured by a certain norm $\|\cdot\|$ [5]. Structured
estimation using norm regularization/minimization has been extensively studied for simple linear
models over the past decade, and recent advances manage to characterize the estimation error for
convex approaches including Lasso-type (regularized) [38, 31, 6] and Dantzig-type (constrained)
estimator [7, 12, 14], via a few simple geometric measures, e.g., Gaussian width [18, 11] and
restricted norm compatibility [31, 12]. Here we propose an alternating estimation (AltEst) procedure
for finding the true parameters, which essentially alternates between estimating $\theta$ through the generalized Dantzig selector (GDS) [12] using norm $\|\cdot\|$ and computing the approximate empirical noise covariance for $\Sigma$. Our analysis puts no restriction on what the norm can be, thus the AltEst framework is applicable to general structures. In contrast to AltMin, our AltEst procedure cannot be cast as a minimization of some joint objective function for $\theta$ and $\Sigma$, thus is conceptually more general than AltMin. For the proposed AltEst, we provide the statistical guarantees for the iterate $\widehat{\theta}_t$ with the resampling assumption (see Section 2), which may justify the applicability of the AltEst technique to other problems without joint objectives for two sets of parameters. Specifically, we show that with overwhelming probability, the estimation error $\|\widehat{\theta}_t - \theta^*\|_2$ for generally structured $\theta^*$ converges linearly to a minimum achievable error given sub-Gaussian design under moderate sample size. With a straightforward intuition, this minimum achievable error can be tersely expressed by the aforementioned geometric measures which simply depend on the structure of $\theta^*$. Moreover, our analysis implies the error bound for single response high-dimensional models as a by-product [12]. Note that the analysis in [24] focuses on the expected prediction error $\mathbb{E}\big[\big\|\Sigma_*^{-1/2} X (\widehat{\theta}_t - \theta^*)\big\|_2\big]$ for unstructured $\theta^*$, which is related but different from our $\|\widehat{\theta}_t - \theta^*\|_2$ for generally structured $\theta^*$. Compared with the error bound derived for unstructured $\theta^*$ in [24], our result also yields better dependency on sample size by removing the $\log n$ factor, which seems unnatural to appear.
The rest of the paper is organized as follows. We elaborate our AltEst algorithm in Section 2, along
with the resampling assumption. In Section 3, we present the statistical guarantees for AltEst. We
provide experimental results in Section 4 to support our theoretical development. Due to space
limitations, all proofs are deferred to the supplementary material.
2 Alternating Estimation for High-Dimensional Multi-Response Models
Given the high-dimensional setting for (1), it is natural to consider the regularized MLE for (1) by adding the norm $\|\cdot\|$ to (2), which captures the structural information of $\theta^*$ in (1),
$$\big(\widehat{\theta}, \widehat{\Sigma}\big) = \operatorname*{argmin}_{\theta\in\mathbb{R}^p,\ \Sigma \succ 0}\ \frac{1}{2}\log|\Sigma| + \frac{1}{2n}\sum_{i=1}^{n} \big\|\Sigma^{-\frac{1}{2}}(y_i - X_i\theta)\big\|_2^2 + \lambda_n \|\theta\|, \qquad (3)$$
where $\lambda_n$ is a tuning parameter. Using AltMin the update of (3) can be given as
$$\widehat{\theta}_t = \operatorname*{argmin}_{\theta\in\mathbb{R}^p}\ \frac{1}{2n}\sum_{i=1}^{n} \big\|\widehat{\Sigma}_{t-1}^{-\frac{1}{2}}(y_i - X_i\theta)\big\|_2^2 + \lambda_n \|\theta\| \qquad (4)$$
$$\widehat{\Sigma}_t = \frac{1}{n}\sum_{i=1}^{n} \big(y_i - X_i\widehat{\theta}_t\big)\big(y_i - X_i\widehat{\theta}_t\big)^T \qquad (5)$$
The update of $\widehat{\theta}_t$ is basically solving a regularized least squares problem, and the new $\widehat{\Sigma}_t$ is obtained by computing the approximated empirical covariance of the residues evaluated at $\widehat{\theta}_t$. In this work, we consider an alternative to (4), the generalized Dantzig selector (GDS) [12], which is given by
$$\widehat{\theta}_t = \operatorname*{argmin}_{\theta\in\mathbb{R}^p}\ \|\theta\| \quad \text{s.t.} \quad \Big\|\frac{1}{n}\sum_{i=1}^{n} X_i^T \widehat{\Sigma}_{t-1}^{-1} (X_i\theta - y_i)\Big\|_* \le \lambda_n, \qquad (6)$$
where $\|\cdot\|_*$ is the dual norm of $\|\cdot\|$. Compared with (4), GDS has nicer geometrical properties, which is favored in the statistical analysis. More importantly, since iteratively solving (6) followed by covariance estimation (5) no longer minimizes a specific objective function jointly, the updates go beyond the scope of AltMin, leading to our broader alternating estimation (AltEst) framework, i.e., alternately estimating one parameter by suitable approaches while keeping the other fixed. For the ease of exposition, we focus on the $m \le n$ scenario, so that $\widehat{\Sigma}_t$ can be easily computed in closed form as shown in (5). When $m > n$ and $\Sigma_*^{-1}$ is sparse, it is beneficial to directly estimate $\Sigma_*^{-1}$ using more advanced estimators [16, 9]. Especially the CLIME estimator [9] enjoys certain desirable properties, which fits into our AltEst framework but not AltMin, and our AltEst analysis does not rely on the particular estimator we use to estimate the noise covariance or its inverse. The algorithmic details are given in Algorithm 1, for which it is worth noting that every iteration $t$ uses independent new samples, $D_{2t-1}$ and $D_{2t}$ in Steps 3 and 4, respectively. This assumption is known as resampling, which facilitates the theoretical analysis by removing the statistical dependency between iterates. Several existing works benefit from such an assumption when analyzing their AltMin-type algorithms [23, 32, 41]. Conceptually resampling can be implemented by partitioning the whole dataset into $T$ subsets, though it is unusual to do so in practice. Loosely speaking, AltEst (AltMin) with resampling is an approximation of the practical AltEst (AltMin) with a single dataset $D$ used by all iterations. For AltMin, attempts have been made to directly analyze its practical version without resampling, by studying the properties of the joint objective [37], which comes at the price of invoking highly sophisticated mathematical tools. This technique, however, might fail to work for AltEst since the procedure is not even associated with a joint objective. In the next section, we will leverage such a resampling assumption to show that the error of $\widehat{\theta}_t$ generated by Algorithm 1 will converge to a small value with high probability. We again emphasize that the AltEst framework may work for other suitable estimators for $(\theta^*, \Sigma^*)$ although (5) and (6) are considered in our analysis.
Algorithm 1 Alternating Estimation with Resampling
Input: Number of iterations $T$, Datasets $D_1 = \{(X_i, y_i)\}_{i=1}^{n}, \ldots, D_{2T} = \{(X_i, y_i)\}_{i=(2T-1)n+1}^{2Tn}$
1: Initialize $\widehat{\Sigma}_0 = I_{m\times m}$
2: for $t := 1$ to $T$ do
3:   Solve the GDS (6) for $\widehat{\theta}_t$ using dataset $D_{2t-1}$
4:   Compute $\widehat{\Sigma}_t$ according to (5) using dataset $D_{2t}$
5: end for
6: return $\widehat{\theta}_T$
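To make the procedure concrete, here is a minimal Python sketch of the AltEst loop for the L1 norm, where the GDS (6) becomes a linear program via the split theta = u - v with u, v >= 0. For brevity it reuses a single dataset instead of the resampled D_1, ..., D_2T, and it assumes m <= n so that the covariance from (5) is invertible; all function names are our own.

    import numpy as np
    from scipy.optimize import linprog

    def gds_l1(Xs, ys, Sigma_inv, lam):
        """GDS (6) for the L1 norm: min ||theta||_1 subject to
        ||(1/n) sum_i X_i^T Sigma^{-1} (X_i theta - y_i)||_inf <= lam."""
        n, p = len(Xs), Xs[0].shape[1]
        A = sum(X.T @ Sigma_inv @ X for X in Xs) / n
        b = sum(X.T @ Sigma_inv @ y for X, y in zip(Xs, ys)) / n
        A_ub = np.block([[A, -A], [-A, A]])      # |A(u - v) - b| <= lam
        b_ub = np.concatenate([lam + b, lam - b])
        res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                      bounds=(0, None), method="highs")
        return res.x[:p] - res.x[p:]

    def altest(Xs, ys, lam, T):
        """AltEst loop: alternate the L1-GDS with the empirical covariance (5)."""
        m = ys[0].shape[0]
        Sigma = np.eye(m)
        for _ in range(T):
            theta = gds_l1(Xs, ys, np.linalg.inv(Sigma), lam)
            R = np.stack([y - X @ theta for X, y in zip(Xs, ys)])
            Sigma = R.T @ R / len(Xs)            # empirical covariance (5)
        return theta, Sigma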
3 Statistical Guarantees for Alternating Estimation
In this section, we establish the statistical guarantees for our AltEst algorithm. The road map for the
analysis is to first derive the error bounds separately for both (5) and (6), and then combine them
through the AltEst procedure to show the error bound of β̂_t. Throughout the analysis, the design X is assumed to be centered, i.e., E[X] = 0_{m×p}. λ_max(·) and λ_min(·) are used to denote the largest and smallest eigenvalues of a real symmetric matrix. Before presenting the results, we provide some basic but important concepts. First of all, we give the definition of a sub-Gaussian matrix X.
Definition 1 (Sub-Gaussian Matrix) X ∈ R^{m×p} is sub-Gaussian if the ψ₂-norm below is finite,

$$|||X|||_{\psi_2} = \sup_{v\in\mathbb{S}^{p-1},\; u\in\mathbb{S}^{m-1}} \big\| v^T \Gamma_u^{-\frac{1}{2}} X^T u \big\|_{\psi_2} \le \phi < +\infty, \qquad (7)$$

where Γ_u = E[X^T u u^T X]. Further we assume there exist constants κ_min and κ_max such that

$$0 < \kappa_{\min} \le \lambda_{\min}(\Gamma_u) \le \lambda_{\max}(\Gamma_u) \le \kappa_{\max} < +\infty, \qquad \forall\, u \in \mathbb{S}^{m-1}. \qquad (8)$$
The definition (7) is also used in earlier work [24], which assumes the left end of (8) implicitly. Lemma 1 gives an example of a sub-Gaussian X, showing that conditions (7) and (8) are reasonable.

Lemma 1 Assume that X ∈ R^{m×p} has dependent anisotropic rows such that X = Ψ^{1/2} X̃ Γ^{1/2}, where Ψ ∈ R^{m×m} encodes the dependency between rows, X̃ ∈ R^{m×p} has independent isotropic rows, and Γ ∈ R^{p×p} introduces the anisotropy. In this setting, if each row of X̃ satisfies |||x̃_i|||_{ψ₂} ≤ φ̃, then conditions (7) and (8) hold with φ = Cφ̃, κ_min = λ_min(Ψ)λ_min(Γ), and κ_max = λ_max(Ψ)λ_max(Γ).
The recovery guarantee of GDS relies on an important notion called the restricted eigenvalue (RE). In the multi-response setting, it is defined jointly for the designs X_i and a noise covariance Σ as follows.

Definition 2 (Restricted Eigenvalue Condition) The designs X₁, X₂, ..., X_n and the covariance Σ together satisfy the restricted eigenvalue condition for a set A ⊆ S^{p−1} with parameter κ > 0, if

$$\inf_{v\in\mathsf{A}}\; v^T\Big(\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma^{-1}X_i\Big)v \;\ge\; \kappa. \qquad (9)$$
Apart from the RE condition, the analysis of GDS is carried out on the premise that the tuning parameter λ_n is suitably selected, which we define as "admissible".

Definition 3 (Admissible Tuning Parameter) The λ_n for GDS (6) is said to be admissible if λ_n is chosen such that β* belongs to the constraint set, i.e.,

$$\Big\|\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma^{-1}\left(X_i\beta^* - y_i\right)\Big\|_* = \Big\|\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma^{-1}\eta_i\Big\|_* \le \lambda_n. \qquad (10)$$
For structured estimation, one also needs to characterize the structural complexity of ? ? , and an
appropriate choice is the Gaussian width [18]. For any set A ? Rp , its Gaussian width is given
by w(A) = E [supu?A hu, gi], where g ? N (0, Ip?p ) is a standard Gaussian random vector. In
the analysis, the set A of our interests typically relies on the structure of ? ? . Previously Gaussian
width has been applied to statistical analyses for various problems [11, 6, 39], and recent works
[34, 13] show that Gaussian width is computable for many structures. For the rest of the paper, we
use C, C0 , C1 and so on to denote universal constants, which are different from context to context.
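As a quick illustration of the Gaussian width (our own sketch, not from the paper), the snippet below estimates w(B) by Monte Carlo for the unit L1 ball, for which the supremum has a closed form and the width is known to grow as √(2 log p), up to lower-order terms.

import numpy as np

rng = np.random.default_rng(0)
p, trials = 500, 5000
g = rng.standard_normal((trials, p))
# Over the unit L1 ball B1, sup_{u in B1} <u, g> = ||g||_inf, so w(B1) = E ||g||_inf.
w_mc = np.abs(g).max(axis=1).mean()
print(w_mc, np.sqrt(2 * np.log(p)))  # the two agree up to lower-order corrections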
3.1 Estimation of Coefficient Vector
In this subsection, we focus on estimating β*, i.e., Step 3 of Algorithm 1, using a GDS of the form

$$\hat\beta = \operatorname*{argmin}_{\beta\in\mathbb{R}^p}\; \|\beta\| \quad \text{s.t.} \quad \Big\|\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma^{-1}\left(X_i\beta - y_i\right)\Big\|_* \le \lambda_n, \qquad (11)$$
where Σ is an arbitrary but fixed input noise covariance matrix. The following lemma shows a deterministic error bound for β̂ under the RE condition and admissible λ_n defined in (9) and (10).
Lemma 2 Suppose the RE condition (9) is satisfied by X₁, ..., X_n and Σ with κ > 0 for the set A(β*) = cone{v | ‖β* + v‖ ≤ ‖β*‖} ∩ S^{p−1}. If λ_n is admissible, β̂ in (11) satisfies

$$\big\|\hat\beta - \beta^*\big\|_2 \le 2\,\Psi(\beta^*)\cdot\frac{\lambda_n}{\kappa}, \qquad (12)$$

in which Ψ(β*) is the restricted norm compatibility defined as Ψ(β*) = sup_{v∈A(β*)} ‖v‖/‖v‖₂.
From Lemma 2, we can find that the L2-norm error is mainly determined by three quantities: Ψ(β*), λ_n and κ. The restricted norm compatibility Ψ(β*) purely hinges on the geometric structure of β* and ‖·‖, thus involving no randomness. On the contrary, λ_n and κ need to satisfy their own conditions, which are bound to deal with the random X_i and η_i. The set A(β*) involved in the RE condition and the restricted norm compatibility has a relatively simple structure, which will favor the derivation of error bounds for a variety of norms [13]. If the RE condition fails to hold, i.e. κ = 0, the error bound is meaningless. Though the error is proportional to the user-specified λ_n, assigning an arbitrarily small value to λ_n may not be admissible. Hence, in order to further derive the recovery guarantees for GDS, we need to verify the RE condition and find the smallest admissible value of λ_n.
Restricted Eigenvalue Condition: Firstly, the following lemma characterizes the relation between the expectation and the empirical mean of X^T Σ^{-1} X.
Lemma 3 Given a sub-Gaussian X ∈ R^{m×p} with i.i.d. copies X₁, ..., X_n, and a covariance Σ ∈ R^{m×m} with eigenvectors u₁, ..., u_m, let Ω = E[X^T Σ^{-1} X] and Ω̂ = (1/n) Σ_{i=1}^n X_i^T Σ^{-1} X_i. Define the set A_{Γ_j} for A ⊆ S^{p−1} and each Γ_j = E[X^T u_j u_j^T X] as A_{Γ_j} = {v ∈ S^{p−1} | Γ_j^{-1/2} v ∈ cone(A)}. If n ≥ C₁φ⁴ · max_j w²(A_{Γ_j}), then with probability at least 1 − m·exp(−C₂n/φ⁴), we have

$$v^T\hat\Omega\,v \ge \frac{1}{2}\,v^T\Omega\,v, \qquad \forall\, v \in \mathsf{A}. \qquad (13)$$
Instead of w(A_{Γ_j}), ideally we want the condition above on n to be characterized by w(A), which can be easier to compute in general. The next lemma accomplishes this goal.
Lemma 4 Let ρ₀ be the ψ₂-norm of a standard Gaussian random vector and Γ_u = E[X^T u u^T X], where u ∈ S^{m−1} is fixed. For A_{Γ_u} defined in Lemma 3, we have

$$w(\mathsf{A}_{\Gamma_u}) \le C\rho_0\sqrt{\kappa_{\max}/\kappa_{\min}}\cdot\left(w(\mathsf{A}) + 3\right). \qquad (14)$$
Lemma 4 implies that the Gaussian width w(A_{Γ_j}) appearing in Lemma 3 is of the same order as w(A). Putting Lemmas 3 and 4 together, we can obtain the RE condition for the analysis of GDS.
Corollary 1 Under the notation of Lemmas 3 and 4, if n ≥ C₁ρ₀²φ⁴ · (κ_max/κ_min) · (w(A) + 3)², then the following inequality holds for all v ∈ A ∩ S^{p−1} with probability at least 1 − m·exp(−C₂n/φ⁴),

$$v^T\hat\Omega\,v \ge \frac{\kappa_{\min}\cdot\mathrm{Tr}\left(\Sigma^{-1}\right)}{2}. \qquad (15)$$
Admissible Tuning Parameter: Finding the admissible λ_n amounts to estimating the value of ‖(1/n) Σ_{i=1}^n X_i^T Σ^{-1} η_i‖_* in (10), which involves the random X_i and η_i. The next lemma establishes a high-probability bound for this quantity, which can be viewed as the smallest "safe" choice of λ_n.
Lemma 5 Assume that X_i is sub-Gaussian and η_i ∼ N(0, Σ_*). The following inequality holds with probability at least 1 − exp(−nτ²/2) − C₂·exp(−C₁²w²(B)/(4φ²)),

$$\Big\|\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma^{-1}\eta_i\Big\|_* \le C\phi\sqrt{\frac{\kappa_{\max}}{n}}\cdot\sqrt{\mathrm{Tr}\left(\Sigma^{-1}\Sigma_*\Sigma^{-1}\right)}\cdot w(\mathsf{B}), \qquad (16)$$

where B denotes the unit ball of the norm ‖·‖, θ = sup_{v∈B} ‖v‖₂, and τ = ‖Σ^{-1}Σ_*^{1/2}‖_F / ‖Σ^{-1}Σ_*^{1/2}‖₂.
Estimation Error of GDS: Building on Corollary 1, Lemma 2 and Lemma 5, the theorem below characterizes the estimation error of GDS for the multi-response linear model.

Theorem 1 Under the setting of Lemma 5, if n ≥ C₁ρ₀²φ⁴ · (κ_max/κ_min) · (w(A(β*)) + 3)², and λ_n is set to C₂φ√(κ_max/n)·√(Tr(Σ^{-1}Σ_*Σ^{-1}))·w(B), the estimation error of β̂ given by (11) satisfies

$$\big\|\hat\beta - \beta^*\big\|_2 \le C\phi\cdot\frac{\sqrt{\kappa_{\max}}}{\kappa_{\min}}\cdot\frac{\sqrt{\mathrm{Tr}\left(\Sigma^{-1}\Sigma_*\Sigma^{-1}\right)}}{\mathrm{Tr}\left(\Sigma^{-1}\right)}\cdot\frac{\Psi(\beta^*)\cdot w(\mathsf{B})}{\sqrt{n}}, \qquad (17)$$

with probability at least 1 − m·exp(−C₃n/φ⁴) − exp(−nτ²/2) − C₄·exp(−C₅²w²(B)/(4φ²)).
Remark: We can see from the theorem above that the noise covariance Σ input to GDS plays a role in the error bound through the multiplicative factor ζ(Σ) = √(Tr(Σ^{-1}Σ_*Σ^{-1})) / Tr(Σ^{-1}). By taking the derivative of ζ²(Σ) w.r.t. Σ^{-1} and setting it to 0, we have

$$\frac{\partial \zeta^2(\Sigma)}{\partial \Sigma^{-1}} = \frac{2\,\mathrm{Tr}^2\!\left(\Sigma^{-1}\right)\Sigma_*\Sigma^{-1} - 2\,\mathrm{Tr}\left(\Sigma^{-1}\right)\mathrm{Tr}\left(\Sigma^{-1}\Sigma_*\Sigma^{-1}\right) I_{m\times m}}{\mathrm{Tr}^4\left(\Sigma^{-1}\right)} = 0.$$

Then we can verify that Σ = Σ_* is the solution to the equation above, and thus is the minimizer of ζ(Σ), with ζ(Σ_*) = 1/√(Tr(Σ_*^{-1})). This calculation confirms that multi-response regression could benefit from taking the noise covariance into account, and the best performance is achieved when Σ_* is known. If we perform ordinary GDS by setting Σ = I_{m×m}, then ζ(Σ) = 1/√m. Therefore using Σ_* will reduce the error by a factor of √(m / Tr(Σ_*^{-1})), compared with ordinary GDS.
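The effect of the factor ζ(Σ) is easy to check numerically. The sketch below (our own illustration, using the block-diagonal Σ_* from the experiments in Section 4, where Tr(Σ_*) = m) evaluates ζ at Σ = I and Σ = Σ_*, recovering 1/√m and 1/√(Tr(Σ_*^{-1})) respectively.

import numpy as np

def zeta(Sigma, Sigma_star):
    # zeta(Sigma) = sqrt(Tr(Sigma^-1 Sigma_* Sigma^-1)) / Tr(Sigma^-1)
    Si = np.linalg.inv(Sigma)
    return np.sqrt(np.trace(Si @ Sigma_star @ Si)) / np.trace(Si)

m, a = 10, 0.8
Sigma_star = np.kron(np.eye(m // 2), np.array([[1.0, a], [a, 1.0]]))
print(zeta(np.eye(m), Sigma_star))    # ordinary GDS: 1/sqrt(m)
print(zeta(Sigma_star, Sigma_star))   # oracle: 1/sqrt(Tr(Sigma_*^-1)), the minimum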
One simple structure of β* to consider for Theorem 1 is sparsity, encoded by the L1 norm. Given an s-sparse β*, it follows from previous results [31, 11] that Ψ(β*) = O(√s), w(A(β*)) = O(√(s log p)) and w(B) = O(√(log p)). Therefore if n ≥ O(s log p), then with high probability we have

$$\big\|\hat\beta - \beta^*\big\|_2 \le O\!\left(\zeta(\Sigma)\cdot\sqrt{\frac{s\log p}{n}}\right). \qquad (18)$$
Implications for Simple Linear Models: Our general result in the multi-response scenario implies some existing results for simple linear models. If we set n = 1 and Σ = Σ_* = I_{m×m}, i.e., only one data point (X, y) is observed and the noise is independent for each response, the GDS is reduced to

$$\hat\beta_{sg} = \operatorname*{argmin}_{\beta\in\mathbb{R}^p}\; \|\beta\| \quad \text{s.t.} \quad \big\|X^T(X\beta - y)\big\|_* \le \lambda, \qquad (19)$$
which exactly matches that in [12]. To bound its estimation error, we need X to be more structured beyond sub-Gaussianity. Essentially, we consider the model of X in Lemma 1, where the rows of X̃ are additionally assumed to be identically distributed. For such X, a specialized RE condition is as follows.
Lemma 6 Assume X is defined as in Lemma 1 such that X = Ψ^{1/2} X̃ Γ^{1/2}, and the rows of X̃ are i.i.d. with |||x̃_j|||_{ψ₂} ≤ φ̃. If mn ≥ C₁ρ₀²φ̃⁴ · (λ_max(Ψ)λ_max(Γ))/(λ_min(Ψ)λ_min(Γ)) · (w(A) + 3)², then with probability at least 1 − exp(−C₂mn/φ̃⁴), the following inequality is satisfied by all v ∈ A ∩ S^{p−1},

$$v^T\hat\Omega\,v \ge \frac{m\cdot\lambda_{\min}\!\left(\Psi^{\frac12}\Sigma^{-1}\Psi^{\frac12}\right)\cdot\lambda_{\min}(\Gamma)}{2}. \qquad (20)$$
Remark: Lemma 6 characterizes the RE condition for a class of specifically structured designs X. If we specialize the general RE condition in Corollary 1 for this setting, X = Ψ^{1/2} X̃ Γ^{1/2}, it becomes

$$n \ge C_1\rho_0^2\tilde\phi^4\,\frac{\lambda_{\max}(\Psi)\lambda_{\max}(\Gamma)}{\lambda_{\min}(\Psi)\lambda_{\min}(\Gamma)}\,(w(\mathsf{A})+3)^2 \;\Longrightarrow\; v^T\hat\Omega\,v \ge \frac{\lambda_{\min}(\Psi)\lambda_{\min}(\Gamma)\,\mathrm{Tr}\left(\Sigma^{-1}\right)}{2} \quad \text{with probability } 1 - m\exp(-C_2 n/\tilde\phi^4).$$

Comparing the general result above with Lemma 6, there are two striking differences. Firstly, Lemma 6 requires a total sample size of mn rather than n, which improves on the general one. Secondly, (20) holds with the much higher probability 1 − exp(−C₂mn/φ̃⁴) instead of 1 − m·exp(−C₂n/φ̃⁴).
Given this specialized RE condition, we have the recovery guarantees of GDS for simple linear
models, which encompass the settings discussed in [6, 12] as special cases.
Corollary 2 Suppose y = Xβ* + η ∈ R^m, where X is described as in Lemma 6, and η ∼ N(0, I). With probability at least 1 − exp(−m/2) − C₂·exp(−C₁²w²(B)/(4φ̃²)) − exp(−C₃m/φ̃⁴), β̂_sg satisfies

$$\big\|\hat\beta_{sg} - \beta^*\big\|_2 \le C\tilde\phi\,\sqrt{\frac{\lambda_{\max}(\Psi)\lambda_{\max}(\Gamma)}{\lambda_{\min}^2(\Psi)\lambda_{\min}^2(\Gamma)}}\cdot\frac{\Psi(\beta^*)\cdot w(\mathsf{B})}{\sqrt{m}}. \qquad (21)$$
3.2 Estimation of Noise Covariance
In this subsection, we consider the estimation of the noise covariance Σ_* given an arbitrary parameter vector β. When m is small, we estimate Σ_* by simply using the sample covariance

$$\hat\Sigma = \frac{1}{n}\sum_{i=1}^n \left(y_i - X_i\beta\right)\left(y_i - X_i\beta\right)^T. \qquad (22)$$
Theorem 2 reveals the relation between Σ̂ and Σ_*, which is sufficient for our AltEst analysis.
Theorem 2 If

$$n \ge C^4 m\cdot\max\left\{\phi^4\Big(\rho_0 + \sqrt{\tfrac{\kappa_{\max}}{\lambda_{\min}(\Sigma_*)}}\,\|\beta^* - \beta\|_2\Big)^{\!4},\;\; \frac{\lambda_{\max}(\Sigma_*)\,\kappa_{\max}}{\lambda_{\min}^2(\Sigma_*)\,\kappa_{\min}^2}\right\}$$

and X_i is sub-Gaussian, then with probability at least 1 − 2·exp(−C₁m), Σ̂ given by (22) satisfies

$$\lambda_{\max}\!\left(\Sigma_*^{-\frac12}\hat\Sigma\,\Sigma_*^{-\frac12}\right) \le 1 + C^2\rho_0^2\sqrt{m/n} + \frac{2\kappa_{\max}}{\lambda_{\min}(\Sigma_*)}\,\|\beta^* - \beta\|_2^2, \qquad (23)$$

$$\lambda_{\min}\!\left(\Sigma_*^{-\frac12}\hat\Sigma\,\Sigma_*^{-\frac12}\right) \ge 1 - C^2\rho_0^2\sqrt{m/n}. \qquad (24)$$
Remark: If Σ̂ = Σ_*, then λ_max(Σ_*^{-1/2}Σ̂Σ_*^{-1/2}) = λ_min(Σ_*^{-1/2}Σ̂Σ_*^{-1/2}) = 1. Hence Σ̂ is nearly equal to Σ_* when the upper and lower bounds (23) and (24) are close to 1. We would like to point out that there is nothing specific to the particular form of the estimator (22) which makes AltEst work. Similar results can be obtained for other methods that estimate the inverse covariance matrix Σ_*^{-1} instead of Σ_*.
For instance, when m > n and Σ_*^{-1} is sparse, we can replace (22) with GLasso [16] or CLIME [9], and AltEst only requires the counterparts of (23) and (24) in order to work.
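The concentration in (23)-(24) can be observed directly. The sketch below is our own illustration under the assumption β = β*, so the residuals are exactly the noise; the eigenvalues of the whitened sample covariance land within O(√(m/n)) of 1.

import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 400
Sigma_star = np.kron(np.eye(m // 2), np.array([[1.0, 0.8], [0.8, 1.0]]))
L = np.linalg.cholesky(Sigma_star)
R = rng.standard_normal((n, m)) @ L.T     # residuals y_i - X_i beta^* = eta_i
Sigma_hat = R.T @ R / n                   # eq. (22)
Li = np.linalg.inv(L)
W = Li @ Sigma_hat @ Li.T                 # same eigenvalues as Sigma_*^{-1/2} Sigma_hat Sigma_*^{-1/2}
print(np.linalg.eigvalsh(W).min(), np.linalg.eigvalsh(W).max())  # both near 1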
3.3 Error Bound for Alternating Estimation
Section 3.1 shows that the noise covariance in GDS affects the error bound through the factor ζ(Σ). In order to bound the error of β̂_T given by AltEst, we need to further quantify how Σ̂ affects ζ(Σ̂).
Lemma 7 If Σ̂ is given as (22) and the condition in Theorem 2 holds, then the inequality below holds with probability at least 1 − 2·exp(−C₁m),

$$\zeta(\hat\Sigma) \le \zeta(\Sigma_*)\cdot\left(1 + 2C\rho_0\left(\frac{m}{n}\right)^{\!\frac14} + \sqrt{\frac{2\kappa_{\max}}{\lambda_{\min}(\Sigma_*)}}\,\|\beta^* - \beta\|_2\right). \qquad (25)$$
Based on Lemma 7, the following theorem provides the error bound for β̂_T given by Algorithm 1.

Theorem 3 Let

$$e_{\mathrm{orc}} = C_1\,\frac{\phi\sqrt{\kappa_{\max}}}{\kappa_{\min}}\cdot\frac{\zeta(\Sigma_*)\,\Psi(\beta^*)\,w(\mathsf{B})}{\sqrt{n}} \quad\text{and}\quad e_{\min} = e_{\mathrm{orc}}\cdot\frac{1 + 2C\rho_0\,(m/n)^{1/4}}{1 - 2e_{\mathrm{orc}}\sqrt{2\kappa_{\max}/\lambda_{\min}(\Sigma_*)}}\,.$$

If

$$n \ge C^4 m\cdot\max\left\{\phi^4\Big(\rho_0 + \frac{C_1\sqrt{\kappa_{\max}}\,\zeta(\Sigma_*)\Psi(\beta^*)w(\mathsf{B})}{C^2\kappa_{\min}\sqrt{m\,\lambda_{\min}(\Sigma_*)}}\Big)^{\!4},\;\; \phi^4\,\frac{\kappa_{\max}\,\zeta^2(\Sigma_*)\Psi^2(\beta^*)w^2(\mathsf{B})}{\kappa_{\min}^2\,\lambda_{\min}(\Sigma_*)\,m},\;\; 2C_1\,\frac{\sqrt{\kappa_{\max}}\,\zeta(\Sigma_*)\Psi(\beta^*)w(\mathsf{B})}{\kappa_{\min}\sqrt{m\,\lambda_{\min}(\Sigma_*)}}\right\}$$

and n also satisfies the condition in Theorem 1, then with high probability the iterate β̂_T returned by Algorithm 1 satisfies

$$\big\|\hat\beta_T - \beta^*\big\|_2 \le e_{\min} + \left(2e_{\mathrm{orc}}\sqrt{\frac{2\kappa_{\max}}{\lambda_{\min}(\Sigma_*)}}\right)^{\!T-1}\left(\big\|\hat\beta_1 - \beta^*\big\|_2 - e_{\min}\right). \qquad (26)$$
Remark: The three lower bounds for n inside the curly braces correspond to three intuitive requirements. The first one guarantees that the covariance estimation is accurate enough, and the other two respectively ensure that the initial error of β̂₁ and e_orc are reasonably small, so that the subsequent errors can contract linearly. e_orc is the estimation error incurred by the following oracle estimator,

$$\hat\beta_{\mathrm{orc}} = \operatorname*{argmin}_{\beta\in\mathbb{R}^p}\; \|\beta\| \quad \text{s.t.} \quad \Big\|\frac{1}{n}\sum_{i=1}^n X_i^T\Sigma_*^{-1}\left(X_i\beta - y_i\right)\Big\|_* \le \lambda_n, \qquad (27)$$
which is impossible to implement in practice. On the other hand, e_min is the minimum achievable error, which has an extra multiplicative factor compared with e_orc. The numerator of the factor compensates
for the error of the estimated noise covariance provided that β = β* is plugged into (22), which merely depends on the sample size. Since having β = β* is also unrealistic for (22), the denominator further accounts for the ballpark difference between β and β*. As we remark after Theorem 1, if we perform ordinary GDS with Σ set to I_{m×m} in (11), its error bound e_odn satisfies e_odn = e_orc·√(Tr(Σ_*^{-1})/m). Note that this factor √(Tr(Σ_*^{-1})/m) is independent of n, whereas e_min will approach e_orc with increasing n, as the factor between them converges to one.
4 Experiments
In this section, we present some experimental results to support our theoretical analysis. Specifically, we focus on the sparse structure of β* captured by the L1 norm. Throughout the experiments, we fix the problem dimension p = 500, the sparsity level of β* to s = 20, and the number of iterations for AltEst to T = 5. The entries of the design X are generated by i.i.d. standard Gaussians, and β* = [1, ..., 1, −1, ..., −1, 0, ..., 0]^T, with 10 entries equal to 1, 10 entries equal to −1, and 480 zeros. Σ_* is given as a block-diagonal matrix with the block Σ₀ = [1, a; a, 1] replicated along the diagonal, and the number of responses m is assumed to be even.
All plots are obtained by averaging 100 trials. In the first set of experiments, we set a = 0.8, m = 10, and investigate the error of β̂_t as n varies from 40 to 90. We run AltEst (with and without resampling), the oracle GDS, and the ordinary GDS with Σ = I. The results are given in Figure 1. For the second experiment, we fix the product mn ≈ 500 and let m = 2, 4, ..., 10. For our choice of Σ_*, the error incurred by the oracle GDS, e_orc, is the same for every m. We compare AltEst with both oracle and ordinary GDS, and the result is shown in Figures 2(a) and 2(b). In the third experiment, we test AltEst under different covariance matrices Σ_* by varying a from 0.5 to 0.9. m is set to 10 and the sample size n is 90. We again compare AltEst against both oracle and ordinary GDS, and the errors are reported in Figures 2(c) and 2(d).
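A sketch of the data-generating process just described (the helper name make_data is our own) is:

import numpy as np

def make_data(n, m=10, p=500, s=20, a=0.8, seed=0):
    # Synthetic multi-response data following the setup above:
    # s-sparse beta^* with +/-1 entries, block-diagonal Sigma_* with blocks [[1, a], [a, 1]].
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:s // 2], beta[s // 2:s] = 1.0, -1.0
    Sigma_star = np.kron(np.eye(m // 2), np.array([[1.0, a], [a, 1.0]]))
    L = np.linalg.cholesky(Sigma_star)
    X = rng.standard_normal((n, m, p))
    y = X @ beta + rng.standard_normal((n, m)) @ L.T   # y_i = X_i beta^* + eta_i
    return list(X), list(y), beta, Sigma_star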
[Figure 1 plots: normalized error of β̂_t against iteration t (for n = 40, 50, 60, 70, 80, 90) and against sample size n, comparing oracle GDS, resampled AltEst, AltEst, and ordinary GDS. Panels: (a) Error for AltEst, (b) Error for Resampled AltEst, (c) Comparison of Estimators.]
Figure 1: (a) When n = 40, AltEst is not quite stable due to the large initial error and poor quality of estimated
covariance. Then the errors start to decrease for n ≥ 50. (b) Resampled AltEst does benefit from fresh samples,
and its error is slightly smaller than AltEst as well as more stable when n is small. (c) Oracle GDS outperforms
the others, but the performance of AltEst is also competitive. Ordinary GDS is unable to utilize the noise
correlation, thus resulting in relatively large error. By comparing the two implementations of AltEst, we can see
that resampled AltEst yields smaller error especially when data is inadequate, but their errors are very close if n
is suitably large.
[Figure 2 plots: normalized error of β̂_t against iteration t, and final errors against the number of responses m and the correlation parameter a, comparing oracle GDS, AltEst, and ordinary GDS. Panels: (a) AltEst (for m), (b) Comparison (for m), (c) AltEst (for a), (d) Comparison (for a).]
Figure 2: (a) Larger error comes with bigger m, which confirms that emin is increasing along with m when mn
is fixed. (b) The plots for oracle and ordinary GDS imply that eorc and eodn remain unchanged, which matches
the error bounds in Theorem 1. Though emin increases, AltEst still outperforms the ordinary GDS by a margin.
(c) The error goes down when the true noise covariance becomes closer to singular, which is expected in view of
Theorem 3. (d) eorc also decreases as a gets larger, and the gap between emin and eodn widens. The definition of
emin in Theorem 3 indicates that the ratio between emin and eorc is almost a constant because both n and m are
fixed. Here we observe that all the ratios at different a are between 1.05 and 1.1, which supports the theoretical
results. Also, Theorem 1 suggests that eodn does not change as Σ_* varies, which is verified here.
8
Acknowledgements
The research was supported by NSF grants IIS-1563950, IIS-1447566, IIS-1447574, IIS-1422557,
CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, NASA grant NNX12AQ39A, and gifts
from Adobe, IBM, and Yahoo.
References
[1] A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon. Learning sparsely used overcomplete dictionaries via alternating minimization. CoRR, abs/1310.7991, 2013.
[2] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, 3rd edition, 2003.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[4] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In NIPS, 2012.
[5] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. Optimization for Machine Learning, 5, 2011.
[6] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In Advances in Neural Information Processing Systems (NIPS), 2014.
[7] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705-1732, 2009.
[8] L. Breiman and J. H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(1):3-54, 1997.
[9] T. T. Cai, W. Liu, and X. Luo. A constrained l1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594-607, 2011.
[10] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[11] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[12] S. Chatterjee, S. Chen, and A. Banerjee. Generalized Dantzig selector: Application to the k-support norm. In Advances in Neural Information Processing Systems (NIPS), 2014.
[13] S. Chen and A. Banerjee. Structured estimation with atomic norms: General bounds and applications. In NIPS, pages 2908-2916, 2015.
[14] S. Chen and A. Banerjee. Structured matrix recovery via the generalized Dantzig selector. In Advances in Neural Information Processing Systems, 2016.
[15] T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, pages 109-117, 2004.
[16] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[17] A. R. Goncalves, P. Das, S. Chatterjee, V. Sivakumar, F. J. Von Zuben, and A. Banerjee. Multi-task sparse structure learning. In CIKM, pages 451-460, 2014.
[18] Y. Gordon. Some inequalities for Gaussian processes and applications. Israel Journal of Mathematics, 50(4):265-289, 1985.
[19] W. H. Greene. Econometric Analysis. Prentice Hall, 7th edition, 2011.
[20] A. J. Izenman. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis, 5(2):248-264, 1975.
[21] A. J. Izenman. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. Springer, 2008.
[22] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In ICML, 2009.
[23] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665-674, 2013.
[24] P. Jain and A. Tewari. Alternating minimization for regression problems with vector-valued outputs. In Advances in Neural Information Processing Systems (NIPS), pages 1126-1134, 2015.
[25] P. Jain, A. Tewari, and P. Kar. On iterative hard thresholding methods for high-dimensional M-estimation. In NIPS, pages 685-693, 2014.
[26] A. Jalali, S. Sanghavi, C. Ruan, and P. K. Ravikumar. A dirty model for multi-task learning. In Advances in Neural Information Processing Systems (NIPS), pages 964-972, 2010.
[27] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. J. Mach. Learn. Res., 12:2297-2334, 2011.
[28] S. Kim and E. P. Xing. Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. Ann. Appl. Stat., 6(3):1095-1117, 2012.
[29] W. Lee and Y. Liu. Simultaneous multiple response regression and inverse covariance matrix estimation via penalized Gaussian maximum likelihood. J. Multivar. Anal., 111:241-255, 2012.
[30] H. Liu, M. Palatucci, and J. Zhang. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In ICML, pages 649-656, 2009.
[31] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. Statistical Science, 27(4):538-557, 2012.
[32] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In NIPS, 2013.
[33] P. Rai, A. Kumar, and H. Daume. Simultaneously leveraging output and task structures for multiple-output regression. In NIPS, pages 3185-3193, 2012.
[34] N. Rao, B. Recht, and R. Nowak. Universal measurement bounds for structured sparse signal recovery. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[35] A. J. Rothman, E. Levina, and J. Zhu. Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4):947-962, 2010.
[36] K.-A. Sohn and S. Kim. Joint estimation of structured sparsity and output structure in multiple-output regression via inverse-covariance regularization. In AISTATS, pages 1081-1089, 2012.
[37] R. Sun and Z.-Q. Luo. Guaranteed matrix completion via nonconvex factorization. In FOCS, 2015.
[38] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[39] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. Pages 67-101. Springer International Publishing, 2015.
[40] M. Wytock and Z. Kolter. Sparse Gaussian conditional random fields: Algorithms, theory, and application to energy forecasting. In International Conference on Machine Learning, pages 1265-1273, 2013.
[41] X. Yi, C. Caramanis, and S. Sanghavi. Alternating minimization for mixed linear regression. In ICML, pages 613-621, 2014.
[42] X.-T. Yuan and T. Zhang. Partial Gaussian graphical model estimation. IEEE Transactions on Information Theory, 60:1673-1687, 2014.
Convolutional Gaussian Processes
Mark van der Wilk
Department of Engineering
University of Cambridge, UK
[email protected]
Carl Edward Rasmussen
Department of Engineering
University of Cambridge, UK
[email protected]
James Hensman
prowler.io
Cambridge, UK
[email protected]
Abstract
We present a practical way of introducing convolutional structure into Gaussian
processes, making them more suited to high-dimensional inputs like images. The
main contribution of our work is the construction of an inter-domain inducing point
approximation that is well-tailored to the convolutional kernel. This allows us to
gain the generalisation benefit of a convolutional kernel, together with fast but
accurate posterior inference. We investigate several variations of the convolutional
kernel, and apply it to MNIST and CIFAR-10, where we obtain significant improvements over existing Gaussian process models. We also show how the marginal
likelihood can be used to find an optimal weighting between convolutional and
RBF kernels to further improve performance. This illustration of the usefulness
of the marginal likelihood may help automate discovering architectures in larger
models.
1 Introduction
Gaussian processes (GPs) [1] can be used as a flexible prior over functions, which makes them an
elegant building block in Bayesian nonparametric models. In recent work, there has been much
progress in addressing the computational issues preventing GPs from scaling to large problems
[2, 3, 4, 5]. However, orthogonal to being able to algorithmically handle large quantities of data is the
question of how to build GP models that generalise well. The properties of a GP prior, and hence its
ability to generalise in a specific problem, are fully encoded by its covariance function (or kernel).
Most common kernel functions rely on rather rudimentary and local metrics for generalisation, like
the Euclidean distance. This has been widely criticised, notably by Bengio [6], who argued that deep
architectures allow for more non-local generalisation. While deep architectures have seen enormous
success in recent years, it is an interesting research question to investigate what kind of non-local
generalisation structures can be encoded in shallow structures like kernels, while preserving the
elegant properties of GPs.
Convolutional structures have non-local influence and have successfully been applied in neural
networks to improve generalisation for image data [see e.g. 7, 8]. In this work, we investigate
how Gaussian processes can be equipped with convolutional structures, together with accurate
approximations that make them applicable in practice. A previous approach by Wilson et al. [9]
transforms the inputs to a kernel using a convolutional neural network. This produces a valid kernel
since applying a deterministic transformation to kernel inputs results in a valid kernel [see e.g. 1, 10],
with the (many) parameters of the transformation becoming kernel hyperparameters. We stress that
our approach is different in that the process itself is convolved, which does not require the introduction
of additional parameters. Although our method does have inducing points that play a similar role
to the filters in a convolutional neural network (convnet), these are variational parameters and are
therefore more protected from over-fitting.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Background
Interest in Gaussian processes in the machine learning community started with the realisation that
a shallow but infinitely wide neural network with Gaussian weights was a Gaussian process [11] –
a nonparametric model with analytically tractable posteriors and marginal likelihoods. This gives
two main desirable properties. Firstly, the posterior gives uncertainty estimates, which, combined
with having an infinite number of basis functions, results in sensibly large uncertainties far from
the data (see Quiñonero-Candela and Rasmussen [12, fig. 5] for a useful illustration). Secondly, the marginal likelihood can be used to select kernel hyperparameters. The main drawback is an O(N³) computational cost for N observations. Because of this, much attention over recent years
has been devoted to scaling GP inference to large datasets through sparse approximations [2, 13, 14],
minibatch-based optimisation [3], exploiting structure in the covariance matrix [e.g. 15] and Fourier
methods [16, 17].
In this work, we adopt the variational framework for approximation in GP models, because it can simultaneously give a computational speed-up to O(NM²) (with M ≪ N) through sparse
approximations [2] and approximate posteriors due to non-Gaussian likelihoods [18]. The variational
choice is both elegant and practical: it can be shown that the variational objective minimises the
KL divergence across the entire latent process [4, 19], which guarantees that the exact model will
be approximated given enough resources. Other methods, such as EP/FITC [14, 20, 21, 22], can
be seen as approximate models that do not share this property, leading to behaviour that would not
be expected from the model that is to be approximated [23]. It is worth noting however, that our
method for convolutional GPs is not specific to the variational framework, and can be used without
modification with other objective functions, such as variations on EP.
2.1 Gaussian variational approximation
We adopt the popular choice of combining a sparse GP approximation with a Gaussian assumption,
using a variational objective as introduced in [24]. We choose our model to be
$$f(\cdot)\,|\,\theta \sim \mathcal{GP}\left(0,\, k(\cdot,\cdot)\right), \qquad (1)$$

$$y_i\,|\,f, \mathbf{x}_i \overset{iid}{\sim} p\left(y_i\,|\,f(\mathbf{x}_i)\right), \qquad (2)$$
where p(yi | f (xi )) is some non-Gaussian likelihood, for example a Bernoulli distribution through a
probit link function for classification. The kernel parameters θ are to be estimated by approximate
maximum likelihood, and we drop them from the notation hereon. Following Titsias [2], we choose
the approximate posterior to be a GP with its marginal distribution specified at M "inducing inputs" Z = {z_m}_{m=1}^M. Denoting the value of the GP at those points as u = {f(z_m)}_{m=1}^M, the approximate posterior process is constructed from the specified marginal and the prior conditional¹:

$$\mathbf{u} \sim \mathcal{N}\left(\mathbf{m}, \mathbf{S}\right), \qquad (3)$$

$$f(\cdot)\,|\,\mathbf{u} \sim \mathcal{GP}\left(\mathbf{k}_u(\cdot)^{\top}\mathbf{K}_{uu}^{-1}\mathbf{u},\;\; k(\cdot,\cdot) - \mathbf{k}_u(\cdot)^{\top}\mathbf{K}_{uu}^{-1}\mathbf{k}_u(\cdot)\right). \qquad (4)$$
The vector-valued function k_u(·) gives the covariance between u and the remainder of f, and is constructed from the kernel: k_u(·) = [k(z_m, ·)]_{m=1}^M. The matrix K_uu is the prior covariance of u.
The variational parameters m, S and Z are then optimised with respect to the evidence lower bound
(ELBO):
$$\mathrm{ELBO} = \sum_i \mathbb{E}_{q(f(\mathbf{x}_i))}\left[\log p(y_i\,|\,f(\mathbf{x}_i))\right] - \mathrm{KL}\left[q(\mathbf{u})\,\|\,p(\mathbf{u})\right]. \qquad (5)$$
Here, q(u) is the density of u associated with equation (3), and p(u) is the prior density from (1).
Expectations are taken with respect to the marginals of the posterior approximation, given by
$$q(f(\mathbf{x}_i)) = \mathcal{N}\left(\mu_i, \sigma_i^2\right), \qquad (6)$$

$$\mu_i = \mathbf{k}_u(\mathbf{x}_i)^{\top}\mathbf{K}_{uu}^{-1}\mathbf{m}, \qquad (7)$$

$$\sigma_i^2 = k(\mathbf{x}_i, \mathbf{x}_i) + \mathbf{k}_u(\mathbf{x}_i)^{\top}\mathbf{K}_{uu}^{-1}\left(\mathbf{S} - \mathbf{K}_{uu}\right)\mathbf{K}_{uu}^{-1}\mathbf{k}_u(\mathbf{x}_i). \qquad (8)$$
¹ The construction of the approximate posterior can alternatively be seen as a GP posterior to a regression
problem, where the q(u) indirectly specifies the likelihood. Variational inference will then adjust the inputs and
likelihood of this regression problem to make the approximation close to the true posterior in KL divergence.
The matrices K_uu and K_fu are obtained by evaluating the kernel as k(z_m, z_m') and k(x_n, z_m) respectively. The KL divergence term of the ELBO is analytically tractable, whilst the expectation term can be computed using one-dimensional quadrature. The form of the ELBO means that stochastic optimisation using minibatches is applicable. A full discussion of the methodology is given by Matthews [19]. We optimise the ELBO instead of the marginal likelihood to find the hyperparameters.
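A minimal NumPy sketch of the marginals (6)-(8), assuming a generic kernel function kern(A, B) that returns the kernel matrix between the rows of A and B (the helper name and the jitter term are our own additions), is:

import numpy as np

def sparse_gp_marginals(kern, X, Z, m_u, S, jitter=1e-6):
    # Marginals q(f(x_i)) = N(mu_i, sigma_i^2) of eqs. (6)-(8).
    Kuu = kern(Z, Z) + jitter * np.eye(len(Z))
    Kfu = kern(X, Z)
    A = np.linalg.solve(Kuu, Kfu.T).T                      # Kfu Kuu^{-1}
    mu = A @ m_u                                           # eq. (7)
    kff = np.diag(kern(X, X))                              # naive: full matrix for its diagonal
    var = kff + np.einsum('ij,jk,ik->i', A, S - Kuu, A)    # eq. (8)
    return mu, var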
2.2 Inter-domain variational GPs
Inter-domain Gaussian processes [25] work by replacing the variables u, which we have above
assumed to be observations of the function at the inducing inputs Z, with more complicated variables
made by some linear operator on the function. Using linear operators ensures that the inducing
variables u are still jointly Gaussian with the other points on the GP. Implementing inter-domain
inducing variables can therefore be a drop-in replacement to inducing points, requiring only that the
appropriate (cross-)covariances Kfu and Kuu are used.
The key advantage of the inter-domain approach is that the effective basis functions k_u(·) of the approximate posterior mean (7) can be manipulated by the linear operator which constructs u. This can make
the approximation more flexible, or give other computational benefits. For example, Hensman et al.
[17] used the Fourier transform to construct u such that the Kuu matrix becomes easier to invert.
Inter-domain inducing variables are usually constructed using a weighted integral of the GP:
$$u_m = \int \phi(\mathbf{x}; \mathbf{z}_m)\, f(\mathbf{x})\, d\mathbf{x}, \qquad (9)$$

where the weighting function φ depends on some parameters z_m. The covariance between the inducing variable u_m and a point on the function is then

$$\mathrm{cov}\left(u_m, f(\mathbf{x}_n)\right) = k(\mathbf{z}_m, \mathbf{x}_n) = \int \phi(\mathbf{x}; \mathbf{z}_m)\, k(\mathbf{x}, \mathbf{x}_n)\, d\mathbf{x}, \qquad (10)$$

and the covariance between two inducing variables is

$$\mathrm{cov}\left(u_m, u_{m'}\right) = k(\mathbf{z}_m, \mathbf{z}_{m'}) = \iint \phi(\mathbf{x}; \mathbf{z}_m)\,\phi(\mathbf{x}'; \mathbf{z}_{m'})\, k(\mathbf{x}, \mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'. \qquad (11)$$
Using inter-domain inducing variables in the variational framework is straightforward if the above
integrals are tractable. The results are substituted for the kernel evaluations in equations (7) and (8).
Our proposed method will be an inter-domain approximation in the sense that the inducing input
space is different from the input space of the kernel. However, instead of relying on an integral
transformation of the GP, we construct the inducing variables u alongside the new kernel such that
the effective basis functions contain a convolution operation.
2.3 Additive GPs
We would like to draw attention to previously studied additive models [26, 27], in order to highlight
the similarity with the convolutional kernels we will introduce later. Additive models construct a
prior GP as a sum of functions over subsets of the input dimensions, resulting in a kernel with the
same additive structure. For example, summing over each input dimension i, we get
$$f(\mathbf{x}) = \sum_i f_i(x[i]) \;\Longrightarrow\; k(\mathbf{x}, \mathbf{x}') = \sum_i k_i(x[i], x'[i]). \qquad (12)$$
This kernel exhibits some non-local generalisation, as the relative function values along one dimension
will be the same regardless of the input along other dimensions. In practice, this specific additive
model is rather too restrictive to fit data well, since it assumes that all variables affect the response
y independently. At the other extreme, the popular squared exponential kernel allows interactions
between all dimensions, but this turns out to be not restrictive enough: for high-dimensional problems
we need to impose some restriction on the form of the function.
In this work, we build an additive kernel inspired by the convolution operator found in convnets.
The same function is applied to patches from the input, which allows adjacent pixels to interact, but
imposes an additive structure otherwise.
3 Convolutional Gaussian Processes
We begin by constructing the exact convolutional Gaussian process model, highlighting its connections
to existing neural network models, and challenges in performing inference.
Convolutional kernel construction Our aim is to construct a GP prior on functions on images of size D = W × H to real-valued responses: f : R^D → R. We start with a patch-response function, g : R^E → R, mapping from patches of size E. We use a stride of 1 to extract all patches, so for patches of size E = w × h, we get a total of P = (W − w + 1) × (H − h + 1) patches. We can start by simply making the overall function f the sum of all patch responses. If g(·) is given a GP
prior, a GP prior will also be induced on f(·):

$$g \sim \mathcal{GP}\left(0,\, k_g(\mathbf{z}, \mathbf{z}')\right), \qquad f(\mathbf{x}) = \sum_p g\big(\mathbf{x}^{[p]}\big), \qquad (13)$$

$$\Longrightarrow\; f \sim \mathcal{GP}\Big(0,\; \sum_{p=1}^{P}\sum_{p'=1}^{P} k_g\big(\mathbf{x}^{[p]}, \mathbf{x}'^{[p']}\big)\Big), \qquad (14)$$
where x[p] indicates the pth patch of the image x. This construction is reminiscent of the additive
models discussed earlier, since a function is applied to subsets of the input. However, in this case, the
same function g(·) is applied to all input subsets. This allows all patches in the image to inform the
value of the patch-response function, regardless of their location.
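A direct, deliberately naive NumPy sketch of this construction (our own helper names; an RBF patch kernel is used for concreteness) is given below; it makes the P² cost discussed shortly explicit.

import numpy as np

def extract_patches(img, w=3, h=3):
    # All w x h patches of a W x H image with stride 1, flattened to P rows.
    W, H = img.shape
    return np.array([img[i:i + w, j:j + h].ravel()
                     for i in range(W - w + 1) for j in range(H - h + 1)])

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def k_conv(x1, x2, w=3, h=3):
    # Convolutional kernel of eq. (14): sum the patch kernel over all P^2 patch pairs.
    return rbf(extract_patches(x1, w, h), extract_patches(x2, w, h)).sum()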
Comparison to convnets This approach is similar in spirit to convnets. Both methods start with a
function that is applied to each patch. In the construction above, we introduce a single patch-response
function g(·) that is non-linear and nonparametric. Convnets, on the other hand, rely on many linear
filters, followed by a non-linearity. The flexibility of a single convolutional layer is controlled by the
number of filters, while depth is important in order to allow for enough non-linearity. In our case,
adding more non-linear filters to the construction of f(·) does not increase the capacity to learn. The
patch responses of the multiple filters would be summed, resulting in simply a summed kernel for the
prior over g.
Computational issues Similar kernels have been proposed in various forms [28, 29], but have
never been applied directly in GPs, probably due to the prohibitive costs. Direct implementation
of a GP using kf would be infeasible not only due to the usual cubic cost w.r.t. the number of data
points, but also due to it requiring P 2 evaluations of kg per element of Kff . For MNIST with patches
of size 5, P 2 ? 3.3 ? 105 , resulting in the kernel evaluations becoming a significant bottleneck.
Sparse inducing point methods require M 2 + N M kernel evaluations of kf . As an illustration, the
Kuu matrix for 750 inducing points (which we use in our experiments) would require ? 700 GB of
memory for backpropagation. Luckily, this can largely be avoided.
4 Inducing patch approximations
In the next few sections, we will introduce several variants of the convolutional Gaussian process,
and illustrate their properties using toy and real datasets. Our main contribution is showing that
convolutional structure can be embedded in kernels, and that they can be used within the framework
of nonparametric Gaussian process approximations. We do so by constructing the kernel in tandem
with a suitable domain in which to place the inducing variables. Implementation² requires minimal
changes to existing implementations of sparse variational GP inference, and can leverage GPU
implementations of convolution operations (see appendix). In the appendix we also describe how the
same inference method can be applied to kernels with general invariances.
4.1 Translation invariant convolutional GP
Here we introduce the simplest version of our method. We start with the construction from section
3, with an RBF kernel for k_g. In order to obtain a tractable method, we want to approximate the
² Ours can be found on https://github.com/markvdw/convgp, together with code for replicating the
experiments, and trained models. It is based on GPflow [30], allowing utilisation of GPUs.
(a) Rectangles dataset.
(b) MNIST 0-vs-1 dataset.
Figure 1: The optimised inducing patches for the translation invariant kernel. The inducing patches
are sorted by the value of their corresponding inducing output, illustrating the evidence each patch
has in favour of a class.
true posterior using a small set of inducing points. The main idea is to place these inducing points
in the input space of patches, rather than images. This corresponds to using inter-domain inducing
points. In order to use this approximation we simply need to find the appropriate inter-domain (cross-)
covariances K_uu and K_fu, which are easily found from the construction of the convolutional kernel in equation 14:

$$k_{fu}(\mathbf{x}, \mathbf{z}) = \mathbb{E}_g\left[f(\mathbf{x})\,g(\mathbf{z})\right] = \mathbb{E}_g\Big[\sum_p g\big(\mathbf{x}^{[p]}\big)\, g(\mathbf{z})\Big] = \sum_p k_g\big(\mathbf{x}^{[p]}, \mathbf{z}\big), \qquad (15)$$

$$k_{uu}(\mathbf{z}, \mathbf{z}') = \mathbb{E}_g\left[g(\mathbf{z})\,g(\mathbf{z}')\right] = k_g(\mathbf{z}, \mathbf{z}'). \qquad (16)$$
This improves on the computation from the standard inducing point method, since only covariances between the image patches and inducing patches are needed, allowing K_fu to be calculated with NMP instead of NMP² kernel evaluations. Since K_uu now only requires the covariances between inducing patches, its cost is M² instead of M²P² evaluations. However, evaluating diag[K_ff] does still require NP² evaluations, although N can be small when using minibatch optimisation. This brings the cost of computing the kernel matrices down significantly compared to the O(NM²) cost of the calculation of the ELBO.
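Reusing extract_patches and rbf from the earlier sketch, the inter-domain covariances (15)-(16) can be computed as follows (again a sketch with helper names of our own choosing):

import numpy as np

def inducing_patch_covs(images, Z, w=3, h=3):
    # Eqs. (15)-(16): Kuu[m, m'] = kg(z_m, z_m'), Kfu[n, m] = sum_p kg(x_n^[p], z_m),
    # requiring NMP patch-kernel evaluations instead of NMP^2.
    Kuu = rbf(Z, Z)                                            # M x M
    Kfu = np.stack([rbf(extract_patches(x, w, h), Z).sum(axis=0)
                    for x in images])                          # N x M
    return Kfu, Kuu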
In order to highlight the capabilities of the new kernel, we now consider two toy tasks: classifying
rectangles and distinguishing zeros from ones in MNIST.
Toy demo: rectangles The rectangles dataset is an artificial dataset containing 1200 images of size
28 × 28. Each image contains the outline of a randomly generated rectangle, and is labelled according
to whether the rectangle has larger width or length. Despite its simplicity, the dataset is tricky for
standard kernel-based methods, including Gaussian processes, because of the high dimensionality of
the input, and the strong dependence of the label on multiple pixel locations.
To tackle the rectangles dataset with the convolutional GP, we used a patch size of 3 × 3 and 16
inducing points initialised with uniform random noise. We optimised using Adam [31] (0.01 learning
rate & 100 data points per minibatch) and obtained 1.4% error and a negative log predictive probability
(nlpp) of 0.055 on the test set. For comparison, an RBF kernel with 1200 optimally placed inducing
points, optimised with BFGS, gave 5.0% error and an nlpp of 0.258. Our model is both better in terms
of performance, and uses fewer inducing points. The model works because it is able to recognise
and count vertical and horizontal bars in the patches. The locations of the inducing points quickly
recognise the horizontal and vertical lines in the images – see Figure 1a.
Illustration: Zeros vs ones MNIST We perform a similar experiment for classifying MNIST 0
and 1 digits. This time, we initialise using patches from the training data and use 50 inducing features,
shown in figure 1b. Features in the top left are in favour of classifying a zero, and tend to be diagonal
or bent lines, while features for ones tend to be blank space or vertical lines. We get 0.3% error.
Full MNIST Next, we turn to the full multi-class MNIST dataset. Our setup follows Hensman
et al. [5], with 10 independent latent GPs using the same convolutional kernel, and constraining q(u)
to a Gaussian (see section 2). It seems that this translation invariant kernel is too restrictive for this
task, since the error rate converges at around 2.1%, compared to 1.9% for the RBF kernel.
4.2 Weighted convolutional kernels
We saw in the previous section that although the translation invariant kernel excelled at the rectangles
task, it under-performed compared to the RBF on MNIST. Full translation invariance is too strong
a constraint, which makes intuitive sense for image classification, as the same feature in different
locations of the image can imply different classes. This can be remedied without leaving the family
of Gaussian processes by relaxing the constraint of requiring each patch to give the same contribution,
regardless of its position in the image. We do so by introducing a weight for each patch. Denoting
again the underlying patch-based GP as g, the image-based GP f is given by
$$f(\mathbf{x}) = \sum_p w_p\, g\big(\mathbf{x}^{[p]}\big). \qquad (17)$$

The weights {w_p}_{p=1}^P adjust the relative importance of the response for each location in the image. Only k_f and k_fu differ from the invariant case, and can be found to be:

$$k_f(\mathbf{x}, \mathbf{x}) = \sum_{p}\sum_{q} w_p w_q\, k_g\big(\mathbf{x}^{[p]}, \mathbf{x}^{[q]}\big), \qquad (18)$$

$$k_{fu}(\mathbf{x}, \mathbf{z}) = \sum_p w_p\, k_g\big(\mathbf{x}^{[p]}, \mathbf{z}\big). \qquad (19)$$
The patch weights w ∈ R^P are now kernel hyperparameters, and we optimise them with respect to the ELBO in the same fashion as the underlying parameters of the kernel k_g. This introduces P hyperparameters into the kernel – slightly fewer than the number of input pixels, which is how many hyperparameters an automatic relevance determination kernel would have.
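The corresponding one-line change to the earlier sketch (our own helper, continuing the functions defined above) is:

def k_fu_weighted(x, Z, weights, w=3, h=3):
    # Weighted cross-covariance of eq. (19): kfu(x, z) = sum_p w_p kg(x^[p], z).
    return weights @ rbf(extract_patches(x, w, h), Z)   # length-M vector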
Toy demo: rectangles The errors in the previous section were caused by rectangles along the edge
of the image, which contained bars which only contribute once to the classification score. Bars in the
centre contribute to multiple patches. The weighting allows some up-weighting of patches along the
edge. This results in near-perfect classification, with no classification errors and an nlpp of 0.005.
Full MNIST The weighting causes a significant reduction in error over the translation invariant
and RBF kernels (table 1 & figure 2). The weighted convolutional kernel obtains 1.22% error – a
significant improvement over 1.9% for the RBF kernel [5]. Krauth et al. [32] report 1.55% error
using an RBF kernel, but using a leave-one-out objective for finding the hyperparameters.
4.3 Does convolution capture everything?
As discussed earlier, the additive nature of the convolutional kernel places constraints on the possible
functions in the prior. While these constraints have been shown to be useful for classifying MNIST,
we lose the guarantee (that e.g. the RBF provides) of being able to model any continuous function
arbitrarily well in the large-data limit. This is because convolutional kernels are not universal [33, 34]
in the image input space, despite being nonparametric. This places convolutional kernels in a middle
ground between parametric and universal kernels (see the appendix for a discussion). A kernel
that is universal and has some amount of convolutional structure can be obtained by summing an
RBF component: k(x, x′) = k_rbf(x, x′) + k_conv(x, x′). Equivalently, the GP is constructed by the sum f(x) = f_conv(x) + f_rbf(x). This allows the universal RBF to model any residuals that the
convolutional structure cannot explain. We use the marginal likelihood estimate to automatically
weigh how much of the process should be explained by each of the components, in the same way as
is done in other additive models [27, 35].
Inference in such a model is straightforward under the usual inducing point framework – it only
requires evaluating the sum of kernels. The case considered here is more complicated since we want
the inducing inputs for the RBF to lie in the space of images, while we want to use inducing patches
for the convolutional kernel. This forces us to use a slightly different form for the approximating GP,
representing the inducing inputs and outputs separately, as
$$\begin{pmatrix}\mathbf{u}_{conv}\\ \mathbf{u}_{rbf}\end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix}\mathbf{m}_{conv}\\ \mathbf{m}_{rbf}\end{pmatrix},\, \mathbf{S}\right), \qquad (20)$$

$$f(\cdot)\,|\,\mathbf{u} = f_{conv}(\cdot)\,|\,\mathbf{u}_{conv} + f_{rbf}(\cdot)\,|\,\mathbf{u}_{rbf}. \qquad (21)$$
The variational lower bound changes only through the equations (7) and (8), which must now contain
contributions of the two component Gaussian processes. If covariances in the posterior between f_conv and f_rbf are to be allowed, S must be a full-rank 2M × 2M matrix. A mean-field approximation can be chosen as well, in which case S can be M × M block-diagonal, saving some parameters. Note that regardless of which approach is chosen, the largest matrix to be inverted is still M × M, as u_conv and u_rbf are independent in the prior (see the appendix for more details).
Full MNIST By adding an RBF component, we indeed get an extra reduction in error and nlpp, from 1.22% to 1.17% and 0.048 to 0.039 respectively (table 1 & figure 2). The variances for the convolutional and RBF kernels are 14.3 and 0.011 respectively, showing that the convolutional kernel explains most of the variance in the data.
Figure 2: Test error (left) and negative log predictive probability (nlpp, right) for MNIST, using RBF
(blue), translation invariant convolutional (orange), weighted convolutional (green) and weighted
convolutional + RBF (red) kernels.
Kernel            M     Error (%)   NLPP
Invariant         750   2.08        0.077
RBF               750   1.90        0.068
Weighted          750   1.22        0.048
Weighted + RBF    750   1.17        0.039

Table 1: Final results for MNIST.
4.4 Convolutional kernels for colour images
Our final variants of the convolutional kernel handle images with multiple colour channels. The addition of colour presents an interesting modelling challenge, as the input dimensionality increases significantly, with a large amount of redundant information. As a baseline, the weighted convolutional kernel from section 4.2 can be used by taking all patches from each colour channel together, resulting in $C$ times more patches, where $C$ is the number of colour channels. This kernel can only account for linear interactions between colour channels through the weights, and it is also constrained to give the same patch response regardless of the colour channel. A step up in flexibility would be to define $g(\cdot)$ to take a $w \times h \times C$ patch with all $C$ colour channels. This trades off increasing the dimensionality of the patch-response function input against allowing it to learn non-linear interactions between the colour channels. We call this the colour-patch variant. A middle ground that does not increase the dimensionality as much is to use a different patch-response function $g_c(\cdot)$ for each colour channel.
We will refer to this as the multi-channel convolutional kernel. We construct the overall function $f$ as
$$f(x) = \sum_{p=1}^{P} \sum_{c=1}^{C} w_{pc}\, g_c\!\left(x^{[pc]}\right). \qquad (22)$$
For this variant, inference becomes similar to section 4.3, although for a different reason. While all $g_c(\cdot)$s can use the same inducing patch inputs, we need access to each $g_c(x^{[pc]})$ separately in order to fully specify $f(x)$. This causes us to require separate inducing outputs for each $g_c$. In our approximation, we share the inducing inputs while, as was done in section 4.3, representing the inducing outputs separately. The equations for $f(\cdot)\,|\,u$ change only through the matrices $K_{fu}$ and $K_{uu}$ being $N \times MC$ and $MC \times MC$ respectively. Given that the $g_c(\cdot)$ are independent in the prior, and the inducing inputs are constrained to be the same, $K_{uu}$ is a block-diagonal repetition of $k_g(z_m, z_{m'})$. The elements of $K_{fu}$ are given by
$$k_{f g_c}(x, z) = \mathbb{E}_{\{g_c\}_{c=1}^{C}}\!\left[ \sum_p w_{pc}\, g_c\!\left(x^{[pc]}\right) g_c(z) \right] = \sum_p w_{pc}\, k_g\!\left(x^{[pc]}, z\right). \qquad (23)$$
As in section 4.3, we have the choice to represent a full $CM \times CM$ covariance matrix for all inducing variables $u$, or to opt for a mean-field approximation requiring only $C$ matrices of size $M \times M$. Again, both versions require no expensive matrix operations larger than $M \times M$ (see appendix).
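As a sketch of how these inter-domain covariances can be assembled (under assumed array shapes; this is not the authors' implementation), the following builds $K_{uu}$ as a block-diagonal repetition of the patch kernel and fills $K_{fu}$ entry-wise according to equation (23).

```python
# kg is any patch kernel on flattened patches (e.g. an RBF); patch_fn maps one
# image channel to its (P, ph*pw) array of flattened patches.
import numpy as np

def multichannel_covs(X, Z, w, kg, patch_fn):
    """X: (N, H, W, C) images; Z: (M, ph*pw) inducing patches;
    w: (P, C) patch/channel weights. Returns Kfu (N, MC) and Kuu (MC, MC)."""
    N, C = X.shape[0], X.shape[-1]
    M = Z.shape[0]
    Kg = np.array([[kg(z, z2) for z2 in Z] for z in Z])          # (M, M)
    Kuu = np.kron(np.eye(C), Kg)                                  # block-diagonal
    Kfu = np.zeros((N, M * C))
    for n in range(N):
        for c in range(C):
            Pn = patch_fn(X[n, :, :, c])                          # (P, ph*pw)
            Kpz = np.array([[kg(p, z) for z in Z] for p in Pn])   # (P, M)
            Kfu[n, c * M:(c + 1) * M] = w[:, c] @ Kpz             # sum_p w_pc kg(x^[pc], z)
    return Kfu, Kuu
```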
Finally, a simplification can be made in order to avoid representing $C$ patch-response functions. If the weighting of each of the colour channels is constant w.r.t. the patch location (i.e. $w_{pc} = w_p w_c$), the model is equivalent to using a single patch-response function with an additive kernel:
$$f(x) = \sum_p w_p \sum_c w_c\, g_c\!\left(x^{[pc]}\right) = \sum_p w_p\, \hat{g}\!\left(x^{[pc]}\right), \qquad (24)$$
$$\hat{g}(\cdot) \sim \mathcal{GP}\!\left(0,\ \sum_c w_c\, k_c(\cdot, \cdot)\right). \qquad (25)$$
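A minimal sketch of the resulting additive patch kernel, assuming one base kernel per channel is supplied, is:

```python
def k_gtilde(p, q, kernels, wc):
    """Additive patch kernel of equation (25): sum_c w_c * k_c(p_c, q_c),
    where p and q are sequences of per-channel flattened patches."""
    return sum(w * k(pc, qc) for w, k, pc, qc in zip(wc, kernels, p, q))
```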
CIFAR-10 We conclude the experiments with an investigation of CIFAR-10 [36], where $32 \times 32$-sized RGB images are to be classified. We use a similar setup to the previous MNIST experiments, using $5 \times 5$ patches. Again, all latent functions share the same kernel for the prior, including the patch weights. We compare an RBF kernel to 4 variants of the convolutional kernel: the baseline "weighted", the colour-patch variant, the colour-patch variant with additive structure (equation 24), and the multi-channel variant with mean-field inference. All models use 1000 inducing inputs and are trained using Adam. Due to memory constraints on the GPU, a minibatch size of 40 had to be used for the weighted, additive and multi-channel models.

Test errors and NLPPs during training are shown in figure 3. Any convolutional structure significantly improves classification performance, with colour interactions seeming particularly important: the best performing model is the multi-channel GP. The final error rate of the multi-channel kernel was 35.4%, compared to 48.6% for the RBF kernel. While we acknowledge that this is far from the state of the art using deep nets, it is a significant improvement over existing Gaussian process models, including the 44.95% error reported by Krauth et al. [32], where an RBF kernel was used together with their leave-one-out objective for the hyperparameters. This improvement is orthogonal to the use of a new kernel.
5 Conclusion
We introduced a method for efficiently using convolutional structure in Gaussian processes, akin to how it has been used in neural nets. Our main contribution is showing how placing the inducing inputs in the space of patches gives rise to a natural inter-domain approximation that fits in sparse GP approximation frameworks. We discuss several variations of convolutional kernels and show how they can be used to push the performance of Gaussian process models on image datasets. Additionally, we show how the marginal likelihood can be used to assess to what extent a dataset can be explained with only convolutional structure. We show that convolutional structure alone is not sufficient, and that performance can be improved by adding a small amount of "fully connected" (RBF) structure. The ability to do this, and to tune the hyperparameters automatically, is a real strength of Gaussian processes. It would be valuable to incorporate this ability into larger or deeper models as well.
Figure 3: Test error (left) and nlpp (right) for CIFAR-10, using RBF (blue), baseline weighted
convolutional (orange), full-colour weighted convolutional (green), additive (red), and multi-channel
(purple).
Acknowledgements
CER gratefully acknowledges support from EPSRC grant EP/J012300. MvdW is generously supported by a Qualcomm Innovation Fellowship.
References
[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[2] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[3] James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI), pages 282–290, 2013.
[4] Alexander G. de G. Matthews, James Hensman, Richard E. Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 231–238, 2016.
[5] James Hensman, Alexander G. de G. Matthews, Maurizio Filippone, and Zoubin Ghahramani. MCMC for variationally sparse Gaussian processes. In Advances in Neural Information Processing Systems 28, pages 1639–1647, 2015.
[6] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, January 2009. ISSN 1935-8237.
[7] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, pages 1097–1105, 2012.
[9] Andrew G. Wilson, Zhiting Hu, Ruslan R. Salakhutdinov, and Eric P. Xing. Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems, pages 2586–2594, 2016.
[10] Roberto Calandra, Jan Peters, Carl Edward Rasmussen, and Marc Peter Deisenroth. Manifold Gaussian processes for regression. In 2016 International Joint Conference on Neural Networks (IJCNN), pages 3338–3345, 2016.
[11] Radford M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer, 1996.
[12] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[13] Matthias Seeger, Christopher K. I. Williams, and Neil D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.
[14] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18, pages 1257–1264, 2005.
[15] Andrew Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 1775–1784, 2015.
[16] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
[17] James Hensman, Nicolas Durrande, and Arno Solin. Variational Fourier features for Gaussian processes. arXiv preprint arXiv:1611.06740, 2016.
[18] Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[19] Alexander G. de G. Matthews. Scalable Gaussian Process Inference Using Variational Methods. PhD thesis, University of Cambridge, Cambridge, UK, 2016. Available at http://mlg.eng.cam.ac.uk/matthews/thesis.pdf.
[20] Daniel Hernández-Lobato and José Miguel Hernández-Lobato. Scalable Gaussian process classification via expectation propagation. In Artificial Intelligence and Statistics, pages 168–176, 2016.
[21] Thang D. Bui, Josiah Yan, and Richard E. Turner. A unifying framework for sparse Gaussian process approximation using power expectation propagation. arXiv preprint arXiv:1605.07066, May 2016.
[22] Carlos Villacampa-Calvo and Daniel Hernández-Lobato. Scalable multi-class Gaussian process classification using expectation propagation. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3550–3559, 2017.
[23] Matthias Stephan Bauer, Mark van der Wilk, and Carl Edward Rasmussen. Understanding probabilistic sparse Gaussian process approximations. In Advances in Neural Information Processing Systems, 2016.
[24] James Hensman, Alexander G. de G. Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 351–360, 2015.
[25] Anibal Figueiras-Vidal and Miguel Lázaro-Gredilla. Inter-domain Gaussian processes for sparse inference using inducing features. In Advances in Neural Information Processing Systems 22, pages 1087–1095. Curran Associates, Inc., 2009.
[26] Nicolas Durrande, David Ginsbourger, and Olivier Roustant. Additive covariance kernels for high-dimensional Gaussian process modeling. In Annales de la Faculté des Sciences de Toulouse, volume 21, pages 481–499, 2012.
[27] David K. Duvenaud, Hannes Nickisch, and Carl E. Rasmussen. Additive Gaussian processes. In Advances in Neural Information Processing Systems, pages 226–234, 2011.
[28] Julien Mairal, Piotr Koniusz, Zaid Harchaoui, and Cordelia Schmid. Convolutional kernel networks. Advances in Neural Information Processing Systems 27, pages 2627–2635, 2014.
[29] Gaurav Pandey and Ambedkar Dukkipati. Learning by stretching deep networks. In Tony Jebara and Eric P. Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1719–1727. JMLR Workshop and Conference Proceedings, 2014.
[30] Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, 2017.
[31] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[32] Karl Krauth, Edwin V. Bonilla, Kurt Cutajar, and Maurizio Filippone. AutoGP: Exploring the capabilities and limitations of Gaussian process models, 2016.
[33] Ingo Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
[34] Bharath K. Sriperumbudur, Kenji Fukumizu, and Gert R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389–2410, July 2011.
[35] David K. Duvenaud, James R. Lloyd, Roger B. Grosse, Joshua B. Tenenbaum, and Zoubin Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. In ICML (3), pages 1166–1174, 2013.
[36] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. URL http://www.cs.toronto.edu/~kriz/cifar.html.
Estimation of the covariance structure of heavy-tailed distributions
Stanislav Minsker
Department of Mathematics
University of Southern California
Los Angeles, CA 90007
[email protected]
Xiaohan Wei
Department of Electrical Engineering
University of Southern California
Los Angeles, CA 90007
[email protected]
Abstract
We propose and analyze a new estimator of the covariance matrix that admits strong theoretical guarantees under weak assumptions on the underlying distribution, such as existence of moments of only low order. While estimation of covariance matrices corresponding to sub-Gaussian distributions is well understood, much less is known in the case of heavy-tailed data. As K. Balasubramanian and M. Yuan write,¹ "data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails. In such cases, it is not clear that those covariance matrix estimators .. remain optimal" and "..what are the other possible strategies to deal with heavy tailed distributions warrant further studies." We make a step towards answering this question and prove tight deviation inequalities for the proposed estimator that depend only on the parameters controlling the "intrinsic dimension" associated to the covariance matrix (as opposed to the dimension of the ambient space); in particular, our results are applicable in the case of high-dimensional observations.
1 Introduction
Estimation of the covariance matrix is one of the fundamental problems in data analysis: many important statistical tools, such as Principal Component Analysis (PCA; Hotelling, 1933) and regression analysis, involve covariance estimation as a crucial step. For instance, PCA has immediate applications to nonlinear dimension reduction and manifold learning techniques (Allard et al., 2012), genetics (Novembre et al., 2008), and computational biology (Alter et al., 2000), among many others. However, assumptions underlying the theoretical analysis of most existing estimators, such as various modifications of the sample covariance matrix, are often restrictive and do not hold in real-world scenarios. Usually, such estimators rely on heuristic (and often bias-producing) data preprocessing, such as outlier removal. To eliminate such a preprocessing step from the equation, one has to develop a class of new statistical estimators that admit strong performance guarantees, such as exponentially tight concentration around the unknown parameter of interest, under weak assumptions on the underlying distribution, such as existence of moments of only low order. In particular, such heavy-tailed distributions serve as a viable model for data corrupted with outliers, an almost inevitable scenario in applications.

We make a step towards solving this problem: using tools from random matrix theory, we develop a class of robust estimators that are numerically tractable and are supported by strong theoretical evidence under much weaker conditions than currently available analogues. The term "robustness" refers to the fact that our estimators admit provably good performance even when the underlying distribution is heavy-tailed.
¹ Balasubramanian and Yuan (2016)
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Notation and organization of the paper
Given $A \in \mathbb{R}^{d_1 \times d_2}$, let $A^T \in \mathbb{R}^{d_2 \times d_1}$ be the transpose of $A$. If $A$ is symmetric, we will write $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ for the largest and smallest eigenvalues of $A$. Next, we introduce the matrix norms used in the paper. Everywhere below, $\|\cdot\|$ stands for the operator norm $\|A\| := \sqrt{\lambda_{\max}(A^T A)}$. If $d_1 = d_2 = d$, we denote by $\mathrm{tr}\,A$ the trace of $A$. For $A \in \mathbb{R}^{d_1 \times d_2}$, the nuclear norm $\|\cdot\|_1$ is defined as $\|A\|_1 = \mathrm{tr}\!\left(\sqrt{A^T A}\right)$, where $\sqrt{A^T A}$ is a nonnegative definite matrix such that $\left(\sqrt{A^T A}\right)^2 = A^T A$. The Frobenius (or Hilbert-Schmidt) norm is $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$, and the associated inner product is $\langle A_1, A_2 \rangle = \mathrm{tr}(A_1^* A_2)$. For $z \in \mathbb{R}^d$, $\|z\|_2$ stands for the usual Euclidean norm of $z$. Let $A, B$ be two self-adjoint matrices. We will write $A \succeq B$ (or $A \succ B$) iff $A - B$ is nonnegative (or positive) definite. For $a, b \in \mathbb{R}$, we set $a \vee b := \max(a, b)$ and $a \wedge b := \min(a, b)$. We will also use the standard Big-O and little-o notation when necessary.
Finally, we give a definition of a matrix function. Let $f$ be a real-valued function defined on an interval $\mathbb{T} \subseteq \mathbb{R}$, and let $A \in \mathbb{R}^{d \times d}$ be a symmetric matrix with the eigenvalue decomposition $A = U \Lambda U^*$ such that $\lambda_j(A) \in \mathbb{T}$, $j = 1, \ldots, d$. We define $f(A)$ as $f(A) = U f(\Lambda) U^*$, where
$$f(\Lambda) = f\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_d \end{pmatrix} := \begin{pmatrix} f(\lambda_1) & & \\ & \ddots & \\ & & f(\lambda_d) \end{pmatrix}.$$
A few comments about the organization of the material in the rest of the paper: section 1.2 provides an overview of the related work, and section 2 contains the main results of the paper. The proofs are outlined in section 4; longer technical arguments can be found in the supplementary material.
1.2 Problem formulation and overview of the existing work
Let $X \in \mathbb{R}^d$ be a random vector with mean $\mathbb{E}X = \mu_0$, covariance matrix $\Sigma_0 = \mathbb{E}\left[(X - \mu_0)(X - \mu_0)^T\right]$, and assume $\mathbb{E}\|X - \mu_0\|_2^4 < \infty$. Let $X_1, \ldots, X_m$ be i.i.d. copies of $X$. Our goal is to estimate the covariance matrix $\Sigma_0$ from $X_j$, $j \le m$. This problem and its variations have previously received significant attention from the research community: excellent expository papers by Cai et al. (2016) and Fan et al. (2016) discuss the topic in detail. However, strong guarantees for the best known estimators hold (with few exceptions mentioned below) under the restrictive assumption that $X$ is either bounded with probability 1 or has sub-Gaussian distribution, meaning that there exists $\sigma > 0$ such that for any $v \in \mathbb{R}^d$ of unit Euclidean norm,
$$\Pr\left(|\langle v, X - \mu_0 \rangle| \ge t\right) \le 2 e^{-t^2 \sigma^2 / 2}.$$
In the discussion accompanying the paper by Cai et al. (2016), Balasubramanian and Yuan (2016) write that "data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails. In such cases, it is not clear that those covariance matrix estimators described in this article remain optimal" and "..what are the other possible strategies to deal with heavy tailed distributions warrant further studies." This motivates our main goal: develop new estimators of the covariance matrix that (i) are computationally tractable and perform well when applied to heavy-tailed data and (ii) admit strong theoretical guarantees (such as exponentially tight concentration around the unknown covariance matrix) under weak assumptions on the underlying distribution. Note that, unlike the majority of existing literature, we do not impose any further conditions on the moments of $X$, or on the "shape" of its distribution, such as elliptical symmetry.

Robust estimators of covariance and scatter have been studied extensively during the past few decades. However, the majority of rigorous theoretical results were obtained for the class of elliptically symmetric distributions, which is a natural generalization of the Gaussian distribution; we mention just a small subsample among the thousands of published works. Notable examples include the Minimum Covariance Determinant estimator and the Minimum Volume Ellipsoid estimator, which are discussed in (Hubert et al., 2008), as well as Tyler's (Tyler, 1987) M-estimator of scatter. Works by Fan et al. (2016); Wegkamp et al. (2016); Han and Liu (2017) exploit the connection between Kendall's tau and Pearson's correlation coefficient (Fang et al., 1990) in the context of elliptical distributions to obtain robust estimators of correlation matrices. Interesting results for shrinkage-type estimators have been obtained by Ledoit and Wolf (2004); Ledoit et al. (2012). In a recent work, Chen et al. (2015) study Huber's $\varepsilon$-contamination model, which assumes that the data is generated from a distribution of the form $(1 - \varepsilon)F + \varepsilon Q$, where $Q$ is an arbitrary distribution of "outliers" and $F$ is an elliptical distribution of "inliers", and propose a novel estimator based on the notion of "matrix depth", which is related to Tukey's depth function (Tukey, 1975); a related class of problems has been studied by Diakonikolas et al. (2016). The main difference of the approach investigated in this paper is the ability to handle a much wider class of distributions that are not elliptically symmetric and only satisfy weak moment assumptions. Recent papers by Catoni (2016), Giulini (2015), Fan et al. (2016, 2017); Fan and Kim (2017) and Minsker (2016) are closest in spirit to this direction. For instance, Catoni (2016) constructs a robust estimator of the Gram matrix of a random vector $Z \in \mathbb{R}^d$ (as well as its covariance matrix) via estimating the quadratic form $\mathbb{E}\langle Z, u \rangle^2$ uniformly over all $\|u\|_2 = 1$. However, the bounds are obtained under conditions more stringent than those required by our framework, and the resulting estimators are difficult to evaluate in applications even for data of moderate dimension. Fan et al. (2016) obtain bounds in norms other than the operator norm, which is the focus of the present paper (however, we plan to address optimality guarantees with respect to other norms in the future). Minsker (2016) and Fan et al. (2016) use adaptive truncation arguments to construct robust estimators of the covariance matrix. However, their results are only applicable to the situation when the data is centered (that is, $\mu_0 = 0$). In the robust estimation framework, rigorous extension of the arguments to the case of non-centered high-dimensional observations is non-trivial and requires new tools, especially if one wants to avoid statistically inefficient procedures such as sample splitting. We formulate and prove such extensions in this paper.
2 Main results
The definition of our estimator has its roots in the technique proposed by Catoni (2012). Let
$$\psi(x) = (|x| \wedge 1)\,\mathrm{sign}(x) \qquad (1)$$
be the usual truncation function. As before, let $X_1, \ldots, X_m$ be i.i.d. copies of $X$, and assume that $\hat\mu$ is a suitable estimator of the mean $\mu_0$ from these samples, to be specified later. We define $\hat\Sigma$ as
$$\hat\Sigma := \frac{1}{m\theta} \sum_{i=1}^{m} \psi\!\left(\theta (X_i - \hat\mu)(X_i - \hat\mu)^T\right), \qquad (2)$$
where $\theta \simeq m^{-1/2}$ is small (the exact value will be given later). It easily follows from the definition of the matrix function that
$$\hat\Sigma = \frac{1}{m\theta} \sum_{i=1}^{m} \frac{(X_i - \hat\mu)(X_i - \hat\mu)^T}{\|X_i - \hat\mu\|_2^2}\, \psi\!\left(\theta \|X_i - \hat\mu\|_2^2\right),$$
hence it is easily computable. Note that $\psi(x) = x$ in a neighborhood of 0; it implies that whenever all the random variables $\theta\|X_i - \hat\mu\|_2^2$, $1 \le i \le m$, are "small" (say, bounded above by 1) and $\hat\mu$ is the sample mean, $\hat\Sigma$ is close to the usual sample covariance estimator. On the other hand, $\psi$ "truncates" $\|X_i - \hat\mu\|_2^2$ on level $\simeq \sqrt{m}$, thus limiting the effect of outliers. Our results (formally stated below, see Theorem 2.1) imply that for an appropriate choice of $\theta = \theta(\beta, m, \sigma)$,
$$\left\|\hat\Sigma - \Sigma_0\right\| \le C_0\, \sigma_0 \sqrt{\frac{\beta}{m}}$$
with probability $\ge 1 - d e^{-\beta}$ for some positive constant $C_0$, where
$$\sigma_0^2 := \left\| \mathbb{E}\left[ \|X - \mu_0\|_2^2\, (X - \mu_0)(X - \mu_0)^T \right] \right\|$$
is the "matrix variance".
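The explicit form above translates directly into code. The following NumPy sketch (an illustration, not the authors' reference implementation) computes $\hat\Sigma$ given any robust mean estimate $\hat\mu$, e.g. the median-of-means estimator of Section 2.1, with $\sigma$ and $\beta$ supplied by the user as in Lemma 2.1.

```python
import numpy as np

def psi(x):
    """Truncation function psi(x) = sign(x) * min(|x|, 1) of equation (1)."""
    return np.sign(x) * np.minimum(np.abs(x), 1.0)

def robust_cov(X, mu_hat, sigma, beta):
    """Truncated covariance estimator of equation (2). X: (m, d) sample."""
    m, d = X.shape
    theta = np.sqrt(beta / m) / sigma          # theta = (1/sigma) * sqrt(beta/m)
    Z = X - mu_hat                             # centered observations
    sq = np.sum(Z ** 2, axis=1)                # ||X_i - mu_hat||_2^2
    scale = psi(theta * sq) / np.where(sq > 0, sq, 1.0)
    return (Z * scale[:, None]).T @ Z / (m * theta)
```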
2.1 Robust mean estimation
There are several ways to construct a suitable estimator of the mean $\mu_0$. We present the one obtained via the "median-of-means" approach. Let $x_1, \ldots, x_k \in \mathbb{R}^d$. Recall that the geometric median of $x_1, \ldots, x_k$ is defined as
$$\mathrm{med}(x_1, \ldots, x_k) := \operatorname*{argmin}_{z \in \mathbb{R}^d} \sum_{j=1}^{k} \|z - x_j\|_2.$$
Let $1 < \beta < \infty$ be the confidence parameter, and set $k = \lfloor 3.5\beta \rfloor + 1$; we will assume that $k \le \frac{m}{2}$. Divide the sample $X_1, \ldots, X_m$ into $k$ disjoint groups $G_1, \ldots, G_k$ of size $\lfloor \frac{m}{k} \rfloor$ each, and define
$$\bar\mu_j := \frac{1}{|G_j|} \sum_{i \in G_j} X_i,\quad j = 1, \ldots, k, \qquad \hat\mu := \mathrm{med}(\bar\mu_1, \ldots, \bar\mu_k). \qquad (3)$$
It then follows from Corollary 4.1 in (Minsker, 2015) that
$$\Pr\left( \|\hat\mu - \mu_0\|_2 \ge 11\sqrt{\frac{\mathrm{tr}(\Sigma_0)(\beta + 1)}{m}} \right) \le e^{-\beta}. \qquad (4)$$
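A sketch of (3) in NumPy is given below. The geometric median has no closed form; the Weiszfeld iteration used here is one standard way to approximate it and is an implementation choice, not one prescribed by the paper.

```python
import numpy as np

def geometric_median(pts, n_iter=100, eps=1e-8):
    """Approximate argmin_z sum_j ||z - x_j||_2 via Weiszfeld's iteration."""
    z = pts.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(pts - z, axis=1)
        w = 1.0 / np.maximum(d, eps)           # guard against zero distances
        z = (w[:, None] * pts).sum(axis=0) / w.sum()
    return z

def median_of_means(X, beta):
    """Median-of-means estimator (3). Requires k = floor(3.5*beta)+1 <= m/2."""
    k = int(np.floor(3.5 * beta)) + 1
    blocks = np.array_split(X, k)              # disjoint groups G_1, ..., G_k
    means = np.stack([b.mean(axis=0) for b in blocks])
    return geometric_median(means)
```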
2.2 Robust covariance estimation
Let $\hat\Sigma$ be the estimator defined in (2), with $\hat\mu$ being the "median-of-means" estimator (3). Then $\hat\Sigma$ admits the following performance guarantees:

Lemma 2.1. Assume that $\sigma \ge \sigma_0$, and set $\theta = \frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$. Moreover, let $\bar d := \sigma_0^2 / \|\Sigma_0\|^2$, and suppose that $m \ge C \bar d \beta$, where $C > 0$ is an absolute constant. Then
$$\left\|\hat\Sigma - \Sigma_0\right\| \le 3\sigma\sqrt{\frac{\beta}{m}} \qquad (5)$$
with probability at least $1 - 5 d e^{-\beta}$.
Remark 2.1. The quantity $\bar d$ is a measure of "intrinsic dimension" akin to the "effective rank" $r = \frac{\mathrm{tr}(\Sigma_0)}{\|\Sigma_0\|}$; see Lemma 2.3 below for more details. Moreover, note that the claim of Lemma 2.1 holds for any $\sigma \ge \sigma_0$, rather than just for $\sigma = \sigma_0$; this "degree of freedom" allows the construction of adaptive estimators, as is shown below.
The statement above suggests that one has to know the value of (or a tight upper bound on) the "matrix variance" $\sigma_0^2$ in order to obtain a good estimator $\hat\Sigma$. More often than not, such information is unavailable. To make the estimator completely data-dependent, we will use Lepski's method (Lepski, 1992). To this end, assume that $\sigma_{\min}, \sigma_{\max}$ are "crude" preliminary bounds such that
$$\sigma_{\min} \le \sigma_0 \le \sigma_{\max}.$$
Usually, $\sigma_{\min}$ and $\sigma_{\max}$ do not need to be precise, and can potentially differ from $\sigma_0$ by several orders of magnitude. Set
$$\sigma_j := \sigma_{\min} 2^j \quad\text{and}\quad \mathcal{J} = \left\{ j \in \mathbb{Z} : \sigma_{\min} \le \sigma_j < 2\sigma_{\max} \right\}.$$
Note that the cardinality of $\mathcal{J}$ satisfies $\mathrm{card}(\mathcal{J}) \le 1 + \log_2(\sigma_{\max}/\sigma_{\min})$. For each $j \in \mathcal{J}$, define $\theta_j := \theta(j, \beta) = \frac{1}{\sigma_j}\sqrt{\frac{\beta}{m}}$. Define
$$\hat\Sigma_{m,j} = \frac{1}{m\theta_j} \sum_{i=1}^{m} \psi\!\left(\theta_j (X_i - \hat\mu)(X_i - \hat\mu)^T\right).$$
Finally, set
$$j_* := \min\left\{ j \in \mathcal{J} : \forall k > j \text{ s.t. } k \in \mathcal{J},\ \left\|\hat\Sigma_{m,k} - \hat\Sigma_{m,j}\right\| \le 6\sigma_k\sqrt{\frac{\beta}{m}} \right\} \qquad (6)$$
and $\hat\Sigma_* := \hat\Sigma_{m,j_*}$. Note that the estimator $\hat\Sigma_*$ depends only on $X_1, \ldots, X_m$, as well as $\sigma_{\min}, \sigma_{\max}$.
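The selection rule (6) can be sketched as follows, reusing the robust_cov sketch from above; the grid construction and the spectral-norm test follow the definitions of $\sigma_j$, $\mathcal{J}$, and $j_*$.

```python
import numpy as np

def lepski_cov(X, mu_hat, sigma_min, sigma_max, beta):
    """Adaptive estimator Sigma_hat_* = Sigma_hat_{m, j*} of equation (6)."""
    m = X.shape[0]
    sigmas = []
    s = sigma_min
    while s < 2 * sigma_max:                  # grid J = {sigma_min * 2^j}
        sigmas.append(s)
        s *= 2
    ests = [robust_cov(X, mu_hat, s, beta) for s in sigmas]
    for j, Sj in enumerate(ests):
        ok = all(
            np.linalg.norm(Sk - Sj, 2) <= 6 * sigmas[k] * np.sqrt(beta / m)
            for k, Sk in enumerate(ests) if k > j
        )
        if ok:
            return ests[j]
    return ests[-1]
```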
Our main result is the following statement regarding the performance of the data-dependent estimator $\hat\Sigma_*$:

Theorem 2.1. Suppose $m \ge C\bar d\beta$. Then the following inequality holds with probability at least $1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$:
$$\left\|\hat\Sigma_* - \Sigma_0\right\| \le 18\,\sigma_0\sqrt{\frac{\beta}{m}}.$$
An immediate corollary of Theorem 2.1 is the quantitative result for the performance of PCA based on the estimator $\hat\Sigma_*$. Let $\mathrm{Proj}_k$ be the orthogonal projector on a subspace corresponding to the $k$ largest positive eigenvalues $\lambda_1, \ldots, \lambda_k$ of $\Sigma_0$ (here, we assume for simplicity that all the eigenvalues are distinct), and $\widehat{\mathrm{Proj}}_k$ the orthogonal projector of the same rank as $\mathrm{Proj}_k$ corresponding to the $k$ largest eigenvalues of $\hat\Sigma_*$. The following bound follows from the Davis-Kahan perturbation theorem (Davis and Kahan, 1970), more specifically, its version due to Zwald and Blanchard (2006, Theorem 3).

Corollary 2.1. Let $\Delta_k = \lambda_k - \lambda_{k+1}$, and assume that $\Delta_k \ge 72\,\sigma_0\sqrt{\frac{\beta}{m}}$. Then
$$\left\| \widehat{\mathrm{Proj}}_k - \mathrm{Proj}_k \right\| \le 36\,\frac{\sigma_0}{\Delta_k}\sqrt{\frac{\beta}{m}}$$
with probability $\ge 1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$.
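As an illustration of Corollary 2.1 (not part of the paper's code), the rank-$k$ spectral projectors and their operator-norm distance can be computed as follows.

```python
import numpy as np

def top_k_projector(S, k):
    """Orthogonal projector onto the span of the k leading eigenvectors of S."""
    lam, V = np.linalg.eigh(S)     # eigenvalues in ascending order
    Vk = V[:, -k:]                 # top-k eigenvectors
    return Vk @ Vk.T

def projector_error(S_hat, S0, k):
    """Operator-norm distance between the estimated and true projectors."""
    return np.linalg.norm(top_k_projector(S_hat, k) - top_k_projector(S0, k), 2)
```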
It is worth comparing the bounds of Lemma 2.1 and Theorem 2.1 above to the results of the paper by Fan et al. (2016), which constructs a covariance estimator $\hat\Sigma_m^0$ under the assumption that the random vector $X$ is centered and $\sup_{v \in \mathbb{R}^d: \|v\|_2 \le 1} \mathbb{E}|\langle v, X \rangle|^4 = B < \infty$. More specifically, $\hat\Sigma_m^0$ satisfies the inequality
$$P\left( \left\|\hat\Sigma_m^0 - \Sigma_0\right\| \ge C_1\sqrt{\frac{\beta B d}{m}} \right) \le d e^{-\beta}, \qquad (7)$$
where $C_1 > 0$ is an absolute constant. The main difference between (7) and the bounds of Lemma 2.1 and Theorem 2.1 is that the latter are expressed in terms of $\sigma_0^2$, while the former is in terms of $B$. The following lemma demonstrates that our bounds are at least as good:

Lemma 2.2. Suppose that $\mathbb{E}X = 0$ and $\sup_{v \in \mathbb{R}^d: \|v\|_2 \le 1} \mathbb{E}|\langle v, X \rangle|^4 = B < \infty$. Then $Bd \ge \sigma_0^2$.

It follows from the above lemma that $\bar d = \sigma_0^2/\|\Sigma_0\|^2 \lesssim d$. Hence, by Theorem 2.1, the error rate of the estimator $\hat\Sigma_*$ is bounded above by $O(\sqrt{d/m})$ if $m \gtrsim d$. It has been shown (for example, Lounici, 2014) that the minimax lower bound for covariance estimation is of order $\Omega(\sqrt{d/m})$. Hence, the bounds of Fan et al. (2016) as well as our results imply the correct order of the error. That being said, the "intrinsic dimension" $\bar d$ reflects the structure of the covariance matrix and can potentially be much smaller than $d$, as is shown in the next section.
2.3 Bounds in terms of intrinsic dimension
In this section, we show that under a slightly stronger assumption on the fourth moment of the random vector $X$, the bound $O(\sqrt{d/m})$ is suboptimal, while our estimator can achieve a much better rate in terms of the "intrinsic dimension" associated to the covariance matrix. This makes our estimator useful in applications involving high-dimensional covariance estimation, such as PCA. Assume the following uniform bound on the kurtosis:
$$\max_{k=1,2,\ldots,d} \frac{\sqrt{\mathbb{E}\left(X^{(k)} - \mu_0^{(k)}\right)^4}}{\mathbb{E}\left(X^{(k)} - \mu_0^{(k)}\right)^2} = R < \infty, \qquad (8)$$
where $X^{(k)}$ and $\mu_0^{(k)}$ denote the $k$-th entries of $X$ and $\mu_0$ respectively. The intrinsic dimension of the covariance matrix $\Sigma_0$ can be measured by the effective rank, defined as
$$r(\Sigma_0) = \frac{\mathrm{tr}(\Sigma_0)}{\|\Sigma_0\|}.$$
Note that we always have $r(\Sigma_0) \le \mathrm{rank}(\Sigma_0) \le d$, and in some situations $r(\Sigma_0) \ll \mathrm{rank}(\Sigma_0)$, for instance if the covariance matrix is "approximately low-rank", meaning that it has many small eigenvalues. The constant $\sigma_0^2$ is closely related to the effective rank, as is shown in the following lemma (the proof of which is included in the supplementary material):
Lemma 2.3. Suppose that (8) holds. Then
$$r(\Sigma_0)\|\Sigma_0\|^2 \le \sigma_0^2 \le R^2\, r(\Sigma_0)\|\Sigma_0\|^2.$$
As a result, we have $r(\Sigma_0) \le \bar d \le R^2 r(\Sigma_0)$. The following corollary immediately follows from Theorem 2.1 and Lemma 2.3:

Corollary 2.2. Suppose that $m \ge C\beta\, r(\Sigma_0)$ for an absolute constant $C > 0$ and that (8) holds. Then
$$\left\|\hat\Sigma_* - \Sigma_0\right\| \le 18 R \|\Sigma_0\| \sqrt{\frac{r(\Sigma_0)\beta}{m}}$$
with probability at least $1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$.
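For reference, the effective rank driving the rate in Corollary 2.2 is a one-line computation, which makes the "intrinsic dimension" easy to inspect on a given covariance matrix.

```python
import numpy as np

def effective_rank(S):
    """r(Sigma) = tr(Sigma) / ||Sigma||, the effective rank of a covariance matrix."""
    return np.trace(S) / np.linalg.norm(S, 2)
```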
3 Applications: low-rank covariance estimation
In many data sets encountered in modern applications (for instance, gene expression profiles (Saal et al., 2007)), the dimension of the observations, hence of the corresponding covariance matrix, is larger than the available sample size. However, it is often possible, and natural, to assume that the unknown matrix possesses special structure, such as low rank, thus reducing the "effective dimension" of the problem. The goal of this section is to present an estimator of the covariance matrix that is "adaptive" to the possible low-rank structure; such estimators are well-known and have been previously studied for bounded and sub-Gaussian observations (Lounici, 2014). We extend these results to the case of heavy-tailed observations; in particular, we show that the estimator obtained via soft-thresholding applied to the eigenvalues of $\hat\Sigma_*$ admits optimal guarantees in the Frobenius (as well as operator) norm.
Let $\hat\Sigma_*$ be the estimator defined in the previous section (see equation (6)), and set
$$\hat\Sigma_*^\tau = \operatorname*{argmin}_{A \in \mathbb{R}^{d \times d}} \left\| A - \hat\Sigma_* \right\|_F^2 + \tau \|A\|_1, \qquad (9)$$
where $\tau > 0$ controls the amount of penalty. It is well-known (e.g., see the proof of Theorem 1 in Lounici (2014)) that $\hat\Sigma_*^\tau$ can be written explicitly as
$$\hat\Sigma_*^\tau = \sum_{i=1}^{d} \max\!\left(\lambda_i(\hat\Sigma_*) - \tau/2,\ 0\right) v_i(\hat\Sigma_*)\, v_i(\hat\Sigma_*)^T,$$
where $\lambda_i(\hat\Sigma_*)$ and $v_i(\hat\Sigma_*)$ are the eigenvalues and corresponding eigenvectors of $\hat\Sigma_*$. A small sketch of this soft-thresholding step is given below, after which we state the main result of this section.
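The closed-form solution of (9) is a direct soft-thresholding of the eigenvalues; since $\hat\Sigma_*$ is symmetric, a symmetric eigendecomposition applies.

```python
import numpy as np

def soft_threshold_cov(Sigma_hat, tau):
    """Soft-threshold the eigenvalues of a symmetric matrix at tau / 2."""
    lam, V = np.linalg.eigh(Sigma_hat)
    lam = np.maximum(lam - tau / 2.0, 0.0)
    return (V * lam) @ V.T         # V @ diag(lam) @ V.T
```

With $\tau$ chosen as in Theorem 3.1 below, the resulting estimator adapts to low-rank structure.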
Theorem 3.1. For any $\tau \ge 36\,\sigma_0\sqrt{\frac{\beta}{m}}$,
$$\left\|\hat\Sigma_*^\tau - \Sigma_0\right\|_F^2 \le \inf_{A \in \mathbb{R}^{d \times d}} \left[ \|A - \Sigma_0\|_F^2 + \frac{(1 + \sqrt{2})^2}{8}\, \tau^2\, \mathrm{rank}(A) \right] \qquad (10)$$
with probability $\ge 1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$.

In particular, if $\mathrm{rank}(\Sigma_0) = r$ and $\tau = 36\,\sigma_0\sqrt{\frac{\beta}{m}}$, we obtain that
$$\left\|\hat\Sigma_*^\tau - \Sigma_0\right\|_F^2 \le 162\,(1 + \sqrt{2})^2\, \sigma_0^2\, \frac{\beta r}{m}$$
with probability $\ge 1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$.
4 Proofs
4.1 Proof of Lemma 2.1
The result is a simple corollary of the following statement.
Lemma 4.1. Set $\theta = \frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$, where $\sigma \ge \sigma_0$ and $m \ge \beta$. Let $\bar d := \sigma_0^2/\|\Sigma_0\|^2$. Then, with probability at least $1 - 5 d e^{-\beta}$,
$$\left\|\hat\Sigma - \Sigma_0\right\| \le 2\sigma\sqrt{\frac{\beta}{m}} + C'\|\Sigma_0\| \left[ \left(\frac{\bar d\beta}{m}\right)^{3/4} + \frac{\bar d\beta}{m} + \left(\frac{\bar d\beta}{m}\right)^{5/4} + \left(\frac{\bar d\beta}{m}\right)^{3/2} + \left(\frac{\bar d\beta}{m}\right)^{2} + \left(\frac{\bar d\beta}{m}\right)^{9/4} \right],$$
where $C' > 1$ is an absolute constant.
Now, by Corollary ?? in the supplement, it follows that $\bar d = \sigma_0^2/\|\Sigma_0\|^2 \ge \mathrm{tr}(\Sigma_0)/\|\Sigma_0\| \ge 1$. Thus, assuming that the sample size satisfies $m \ge (6C')^4 \bar d \beta$, we have $\bar d\beta/m \le 1/(6C')^4 < 1$, and by some algebraic manipulations we obtain
$$\left\|\hat\Sigma - \Sigma_0\right\| \le 2\sigma\sqrt{\frac{\beta}{m}} + \sigma\sqrt{\frac{\beta}{m}} = 3\sigma\sqrt{\frac{\beta}{m}}. \qquad (11)$$
For completeness, a detailed computation is given in the supplement. This finishes the proof.
4.2 Proof of Lemma 4.1
Let $\bar B = 11\sqrt{2\,\mathrm{tr}(\Sigma_0)\beta/m}$ be the error bound of the robust mean estimator $\hat\mu$ defined in (3). Let $Z_i = X_i - \mu_0$, let $\Sigma_\nu = \mathbb{E}\left[(Z_i - \nu)(Z_i - \nu)^T\right]$ for $\nu \in \mathbb{R}^d$, and let
$$\hat\Sigma_\nu = \frac{1}{m\theta} \sum_{i=1}^{m} \frac{(Z_i - \nu)(Z_i - \nu)^T}{\|Z_i - \nu\|_2^2}\, \psi\!\left(\theta \|Z_i - \nu\|_2^2\right)$$
for any $\|\nu\|_2 \le \bar B$. We begin by noting that the error can be bounded by the supremum of an empirical process indexed by $\nu$, i.e.,
$$\left\|\hat\Sigma - \Sigma_0\right\| \le \sup_{\|\nu\|_2 \le \bar B} \left\|\hat\Sigma_\nu - \Sigma_0\right\| \le \sup_{\|\nu\|_2 \le \bar B} \left\|\hat\Sigma_\nu - \Sigma_\nu\right\| + \sup_{\|\nu\|_2 \le \bar B} \left\|\Sigma_\nu - \Sigma_0\right\| \qquad (12)$$
with probability at least $1 - e^{-\beta}$. We first estimate the second term $\|\Sigma_\nu - \Sigma_0\|$. For any $\|\nu\|_2 \le \bar B$,
$$\|\Sigma_\nu - \Sigma_0\| = \left\| \mathbb{E}\left[ (Z_i - \nu)(Z_i - \nu)^T - Z_i Z_i^T \right] \right\| = \sup_{v \in \mathbb{R}^d: \|v\|_2 \le 1} \mathbb{E}\left[ \langle Z_i - \nu, v \rangle^2 - \langle Z_i, v \rangle^2 \right] = \sup_{\|v\|_2 \le 1} \langle \nu, v \rangle^2 \le \|\nu\|_2^2 \le \bar B^2 = 242\,\frac{\mathrm{tr}(\Sigma_0)\beta}{m}.$$
It follows from Corollary ?? in the supplement that, with the same probability,
$$\|\Sigma_\nu - \Sigma_0\| \le 242\,\frac{\sigma_0^2 \beta}{\|\Sigma_0\|\, m} = 242\,\|\Sigma_0\|\,\frac{\bar d\beta}{m}. \qquad (13)$$
Our main task is then to bound the first term in (12). To this end, we rewrite it as a double supremum of an empirical process:
$$\sup_{\|\nu\|_2 \le \bar B} \left\|\hat\Sigma_\nu - \Sigma_\nu\right\| = \sup_{\|\nu\|_2 \le \bar B,\ \|v\|_2 \le 1} \left| v^T \left(\hat\Sigma_\nu - \Sigma_\nu\right) v \right|.$$
It remains to estimate the supremum above.
Lemma 4.2. Set $\theta = \frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$, where $\sigma \ge \sigma_0$ and $m \ge \beta$. Let $\bar d := \sigma_0^2/\|\Sigma_0\|^2$. Then, with probability at least $1 - 4 d e^{-\beta}$,
$$\sup_{\|\nu\|_2 \le \bar B,\ \|v\|_2 \le 1} \left| v^T \left(\hat\Sigma_\nu - \Sigma_\nu\right) v \right| \le 2\sigma\sqrt{\frac{\beta}{m}} + C''\|\Sigma_0\| \left[ \left(\frac{\bar d\beta}{m}\right)^{3/4} + \frac{\bar d\beta}{m} + \left(\frac{\bar d\beta}{m}\right)^{5/4} + \left(\frac{\bar d\beta}{m}\right)^{3/2} + \left(\frac{\bar d\beta}{m}\right)^{2} + \left(\frac{\bar d\beta}{m}\right)^{9/4} \right],$$
where $C'' > 1$ is an absolute constant.
Note that $\sigma \ge \sigma_0$ by definition; thus, $\bar d \le \sigma^2/\|\Sigma_0\|^2$. Combining the above lemma with (12) and (13) finishes the proof.
4.3 Proof of Theorem 2.1
Define $\bar j := \min\{ j \in \mathcal{J} : \sigma_j \ge \sigma_0 \}$, and note that $\sigma_{\bar j} \le 2\sigma_0$. We will demonstrate that $j_* \le \bar j$ with high probability. Observe that
$$\Pr(j_* > \bar j) \le \Pr\left( \bigcup_{k \in \mathcal{J}: k > \bar j} \left\{ \left\|\hat\Sigma_{m,k} - \hat\Sigma_{m,\bar j}\right\| > 6\sigma_k\sqrt{\frac{\beta}{m}} \right\} \right)$$
$$\le \Pr\left( \left\|\hat\Sigma_{m,\bar j} - \Sigma_0\right\| > 3\sigma_{\bar j}\sqrt{\frac{\beta}{m}} \right) + \sum_{k \in \mathcal{J}:\, k > \bar j} \Pr\left( \left\|\hat\Sigma_{m,k} - \Sigma_0\right\| > 3\sigma_k\sqrt{\frac{\beta}{m}} \right)$$
$$\le 5 d e^{-\beta} + 5 d \log_2\!\left(\frac{\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta},$$
where we applied (5) to estimate each of the probabilities in the sum, under the assumption that the number of samples $m \ge C\bar d\beta$ and $\sigma_k \ge \sigma_{\bar j} \ge \sigma_0$. It is now easy to see that the event
$$\mathcal{B} = \bigcap_{k \in \mathcal{J}:\, k \ge \bar j} \left\{ \left\|\hat\Sigma_{m,k} - \Sigma_0\right\| \le 3\sigma_k\sqrt{\frac{\beta}{m}} \right\}$$
of probability $\ge 1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$ is contained in $\mathcal{E} = \{ j_* \le \bar j \}$. Hence, on $\mathcal{B}$,
$$\left\|\hat\Sigma_* - \Sigma_0\right\| \le \left\|\hat\Sigma_* - \hat\Sigma_{m,\bar j}\right\| + \left\|\hat\Sigma_{m,\bar j} - \Sigma_0\right\| \le 6\sigma_{\bar j}\sqrt{\frac{\beta}{m}} + 3\sigma_{\bar j}\sqrt{\frac{\beta}{m}} \le 12\sigma_0\sqrt{\frac{\beta}{m}} + 6\sigma_0\sqrt{\frac{\beta}{m}} = 18\sigma_0\sqrt{\frac{\beta}{m}},$$
and the claim follows.

4.4 Proof of Theorem 3.1
The proof is based on the following lemma:

Lemma 4.3. Inequality (10) holds on the event $\mathcal{E} = \left\{ \tau \ge 2\left\|\hat\Sigma_* - \Sigma_0\right\| \right\}$.

To verify this statement, it is enough to repeat the steps of the proof of Theorem 1 in Lounici (2014), replacing each occurrence of the sample covariance matrix by its "robust analogue" $\hat\Sigma_*$.

It then follows from Theorem 2.1 that $\Pr(\mathcal{E}) \ge 1 - 5 d \log_2\!\left(\frac{2\sigma_{\max}}{\sigma_{\min}}\right) e^{-\beta}$ whenever $\tau \ge 36\,\sigma_0\sqrt{\frac{\beta}{m}}$.
Acknowledgments
Research of S. Minsker and X. Wei was partially supported by the National Science Foundation grant
NSF DMS-1712956.
References
Allard, W. K., G. Chen, and M. Maggioni (2012). Multi-scale geometric methods for data sets II: Geometric multi-resolution analysis. Applied and Computational Harmonic Analysis 32(3), 435–462.
Alter, O., P. O. Brown, and D. Botstein (2000). Singular value decomposition for genome-wide expression data processing and modeling. Proceedings of the National Academy of Sciences 97(18), 10101–10106.
Balasubramanian, K. and M. Yuan (2016). Discussion of "Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation". Electronic Journal of Statistics 10(1), 71–73.
Bhatia, R. (2013). Matrix Analysis, Volume 169. Springer Science & Business Media.
Boucheron, S., G. Lugosi, and P. Massart (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
Cai, T. T., Z. Ren, and H. H. Zhou (2016). Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation. Electron. J. Statist. 10(1), 1–59.
Catoni, O. (2012). Challenging the empirical mean and empirical variance: a deviation study. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, Volume 48, pp. 1148–1185.
Catoni, O. (2016). PAC-Bayesian bounds for the Gram matrix and least squares regression with a random design. arXiv preprint arXiv:1603.05229.
Chen, M., C. Gao, and Z. Ren (2015). Robust covariance matrix estimation via matrix depth. arXiv preprint arXiv:1506.00691.
Davis, C. and W. M. Kahan (1970). The rotation of eigenvectors by a perturbation. III. SIAM Journal on Numerical Analysis 7(1), 1–46.
Diakonikolas, I., G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart (2016). Robust estimators in high dimensions without the computational intractability. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pp. 655–664. IEEE.
Fan, J. and D. Kim (2017). Robust high-dimensional volatility matrix estimation for high-frequency factor model. Journal of the American Statistical Association.
Fan, J., Q. Li, and Y. Wang (2017). Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79(1), 247–265.
Fan, J., Y. Liao, and H. Liu (2016). An overview of the estimation of large covariance and precision matrices. The Econometrics Journal 19(1), C1–C32.
Fan, J., W. Wang, and Y. Zhong (2016). An ℓ∞ eigenvector perturbation bound and its application to robust covariance estimation. arXiv preprint arXiv:1603.03516.
Fan, J., W. Wang, and Z. Zhu (2016). Robust low-rank matrix recovery. arXiv preprint arXiv:1603.08315.
Fang, K.-T., S. Kotz, and K. W. Ng (1990). Symmetric Multivariate and Related Distributions. Chapman and Hall.
Giulini, I. (2015). PAC-Bayesian bounds for Principal Component Analysis in Hilbert spaces. arXiv preprint arXiv:1511.06263.
Han, F. and H. Liu (2017). ECA: high dimensional elliptical component analysis in non-Gaussian distributions. Journal of the American Statistical Association.
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 24(6), 417.
Hubert, M., P. J. Rousseeuw, and S. Van Aelst (2008). High-breakdown robust multivariate methods. Statistical Science, 92–119.
Ledoit, O. and M. Wolf (2004). A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 88(2), 365–411.
Ledoit, O., M. Wolf, et al. (2012). Nonlinear shrinkage estimation of large-dimensional covariance matrices. The Annals of Statistics 40(2), 1024–1060.
Lepski, O. (1992). Asymptotically minimax adaptive estimation. I: Upper bounds. Optimally adaptive estimates. Theory of Probability & Its Applications 36(4), 682–697.
Lounici, K. (2014). High-dimensional covariance matrix estimation with missing observations. Bernoulli 20(3), 1029–1058.
Minsker, S. (2015). Geometric median and robust estimation in Banach spaces. Bernoulli 21(4), 2308–2335.
Minsker, S. (2016). Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries. arXiv preprint arXiv:1605.07129.
Novembre, J., T. Johnson, K. Bryc, Z. Kutalik, A. R. Boyko, A. Auton, A. Indap, K. S. King, S. Bergmann, M. R. Nelson, et al. (2008). Genes mirror geography within Europe. Nature 456(7218), 98–101.
Saal, L. H., P. Johansson, K. Holm, S. K. Gruvberger-Saal, Q.-B. She, M. Maurer, S. Koujak, A. A. Ferrando, P. Malmström, L. Memeo, et al. (2007). Poor prognosis in carcinoma is associated with a gene expression signature of aberrant PTEN tumor suppressor pathway activity. Proceedings of the National Academy of Sciences 104(18), 7564–7569.
Tropp, J. A. (2012). User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12(4), 389–434.
Tropp, J. A. (2015). An introduction to matrix concentration inequalities. arXiv preprint arXiv:1501.01571.
Tukey, J. W. (1975). Mathematics and the picturing of data. In Proceedings of the International Congress of Mathematicians, Volume 2, pp. 523–531.
Tyler, D. E. (1987). A distribution-free M-estimator of multivariate scatter. The Annals of Statistics, 234–251.
Wegkamp, M., Y. Zhao, et al. (2016). Adaptive estimation of the copula correlation matrix for semiparametric elliptical copulas. Bernoulli 22(2), 1184–1226.
Zwald, L. and G. Blanchard (2006). On the convergence of eigenspaces in kernel principal component analysis. In Advances in Neural Information Processing Systems 18, pp. 1649–1656. Cambridge, MA: MIT Press.